“Crowdsourcing” is an interesting term to consider in the field of history. In modern usage it means the practice of obtaining information or input on a task or project by enlisting the services of a large number of people. Crowdsourcing typically takes place on the internet, which brings it into contact with the field of digital humanities. The concept of many people working on a single topic sounds good in theory, but it presents a variety of ethical dilemmas and quandaries in the digital public sphere.
When discussing crowdsourced knowledge, I immediately think of Wikipedia, whose history of contradictory evidence and opinionated conclusions prevents accurate answers on a given historical topic. For starters, Wikipedia’s model attracts highly opinionated zealots, partisans, and extremists from all parts of the socio-political spectrum. These individuals aim to enshrine their “version of the facts,” which lets the digital humanities amplify nationalistic or extremist agendas on nominally historical topics. In each of these cases the result is that Wikipedia skews away from the commonly accepted academic, historical, scientific, or cultural consensus on any given topic and toward the extremes.
Similarly, there is the issue of the opinions of the masses. That doesn’t mean other opinions and viewpoints don’t matter, but not everyone is a historian. People naturally hold different beliefs, opinions, and viewpoints on different topics. While gathering different viewpoints is good for history as a whole, the weight of popular opinion can be problematic for the digital humanities. Take the Civil War as an example: if we are crowdsourcing a Wikipedia project about the Civil War, what happens when editors who write that the war was fought over slavery come into conflict with editors who claim it was about states’ rights? Despite its benefits for large-scale transcription, crowdsourcing risks inaccuracy because of the sheer volume of opinions and biases it absorbs.
Finally, crowdsourcing models like Wikipedia lack an objective measure of what makes an online encyclopedia successful. Crowdsourcing projects gauge completeness by the number of pages, views, and editors rather than by accuracy and references. Any initiative or change that threatens these priorities is swiftly defeated because it would reduce the number of pages, thus affecting the perception of “completeness.” Such projects will not prevent anonymous editing, because that would reduce the perception of user engagement. With all that in mind, even safety practices such as flagged revisions go uninstituted because they arguably diminish both page views and editor counts.
The problem isn’t crowdsourcing itself; I would argue it’s how sites like Wikipedia implement it. This raises the question of how we as digital historians should handle crowdsourced knowledge that may be inaccurate or misrepresentative, as well as what we do when technology can fabricate sources that don’t actually exist. For one, safety measures should be put in place so individuals can flag false, opinionated, or incorrect entries. Another method is to work around these problems by employing fact checkers or keeping editors active on these pages. Lastly, heavy emphasis needs to be placed on identifying “deepfakes,” as well as on defining what counts as one. To be brief, deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. Deepfakes leverage powerful machine-learning and artificial-intelligence techniques to manipulate or generate digital historical content. Since deepfake technologies have become increasingly convincing and available to the public, methods must be in place to counter these attempts to deceive others.
In conclusion, crowdsourcing isn’t entirely awful when done correctly, but there are plenty of red flags that need to be addressed. Like traditional history, it contains its share of inaccuracies; traditional scholarship, however, has safeguards in place to counter and root out historical biases and errors. Crowdsourcing in the digital humanities must learn to walk the fine line between genuine inaccuracies and philosophical talking points. Once that is achieved, crowdsourced ideas and projects can become more reliable and widespread tools for digital historians.