Jan 05, 2020

SourceCheck

tldr: Check the project out on GitHub or the Firefox add-ons directory.


I recently listened to the Future of Life Institute podcast with Max Tegmark and Yuval Noah Harari, the author of several popular science books, most notably Sapiens. On this podcast they talked about political polarization and how technology is being used to amplify bias. Here is Harari describing this in an excerpt from the podcast:



Or if you think about the fake news epidemic, basically what’s happening is that corporations and governments are hacking us in their service, but the technology can work the other way around. We can develop an antivirus for the mind, the same way we developed antivirus for the computer. We need to develop an antivirus for the mind, an AI system that serves me and not a corporation or a government, and it gets to know my weaknesses in order to protect me against manipulation.


At present, what’s happening is that the hackers are hacking me. They get to know my weaknesses and that’s how they are able to manipulate me. For instance, with fake news. If they discover that I already have a bias against immigrants, they show me one fake news story, maybe about a group of immigrants raping local women. And I easily believe that because I already have this bias. My neighbor may have an opposite bias. She may think that anybody who opposes immigration is a fascist and the same hackers will find that out and will show her a fake news story about, I don’t know, right wing extremists murdering immigrants and she will believe that.


And then if I meet my neighbor, there is no way we can have a conversation about immigration. Now we can and should, develop an AI system that serves me and my neighbor and alerts us. Look, somebody is trying to hack you, somebody trying to manipulate you. And if we learn to trust this system that it serves us, it doesn’t serve any corporation or government. It’s an important tool in protecting our minds from being manipulated.



They mentioned that it would be nice to have an antivirus for the mind: a tool that could analyze the media that you consume, whether that be a news article, a blog post, a YouTube video, a meme, a tweet, a book, or any other form of media, and inform you of any kind of false or biased information. It wouldn't prevent you from consuming the media, because it's not 1984 we're after; instead it would be a tool that lets you make more informed decisions and keep in check any bias that may be pushed onto you by big tech platforms like Facebook and Google. Here's a cool interactive demo that really highlights how technology can push and nudge an individual towards more extreme biases.

A big theme of the Future of Life Institute is that technology has no moral compass; it is not inherently good or evil, it is just a tool. Currently, our data is being used and analyzed to find the piece of media most likely to keep us engaged. This is usually something that confirms our biases and roots us more strongly in our beliefs. As this cycle continues, it produces a more extremely biased individual. What if that data, your personal data, was used for you, not against you? That's what I think the antivirus for the mind would be.

Antivirus software for computers has been around for a while: tools that protect your computer against malicious programs that try to steal your personal information or "hack" your computer. An antivirus for the mind would, like an antivirus for your computer, protect your mind from being "hacked" by social media, big tech companies, or political groups. The concept of someone "hacking" your mind sounds kind of crazy, like something out of a Matrix movie. In this case, though, I'm going to define a "hack" of an individual's mind as anything that influences their thoughts or behaviors without their knowledge.


Here are some features that I'm imagining the antivirus for your mind might have:


  • Local to your computer, so your data is never sent off to a third party
  • Open source so that it could be free to use and audited for security
  • Non-intrusive: it shouldn't block you from accessing content or developing biases, but instead just inform you of any bias or outside influence and let you make your own decision.

  • Obviously, analyzing all of an individual's media consumption would be a daunting task. I'm thinking that a small start could be a browser extension that parses webpages, checks whether the domain is reputable, checks the links on the page for reputability, and uses Natural Language Processing to detect bias. I'm also imagining a more meta-level tool that could analyze tweets on your timeline, posts on your Facebook feed, recommendations from Google News, or videos on YouTube and give you an overview of the content being recommended to you and into what category it falls. This would help you detect not only whether extreme biases are being pushed upon you but also whether a certain product or genre is being more actively recommended to you.

    For a much, much better description of this problem and more insight into U.S. history, world history, evolutionary psychology, political theory, and neuroscience, check out Tim Urban's WaitButWhy series.
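To make the browser-extension idea above a little more concrete, here is a minimal sketch of what the domain/link reputability check might look like. Everything here is hypothetical: `CREDIBLE_DOMAINS` is a tiny invented allowlist (a real tool would need a curated, regularly updated source list), and the function names are mine, not SourceCheck's actual code.

```javascript
// Hypothetical allowlist of reputable domains. A real tool would load
// a curated, regularly updated list rather than hard-coding one.
const CREDIBLE_DOMAINS = new Set([
  "reuters.com",
  "apnews.com",
  "bbc.com",
]);

// Reduce a hostname like "www.bbc.com" to its registrable base
// ("bbc.com"). This naive two-label version ignores multi-part
// TLDs such as "co.uk".
function baseDomain(hostname) {
  return hostname.split(".").slice(-2).join(".");
}

// Check a single URL against the allowlist.
function isCredible(url) {
  try {
    return CREDIBLE_DOMAINS.has(baseDomain(new URL(url).hostname));
  } catch {
    return false; // malformed URL
  }
}

// Given all link hrefs scraped from a page (in a content script this
// would come from document.querySelectorAll("a[href]")), return the
// ones that fail the check so the extension can flag them.
function flagLinks(hrefs) {
  return hrefs.filter((href) => !isCredible(href));
}
```

The bias-detection half of the idea (the NLP part) is a much harder problem and isn't sketched here; the point of this fragment is just that the domain-level check is simple enough to ship as a first iteration.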



    Update ~ Jan 29, 2020: I created a new project, SourceCheck, that is a baby-step in this direction. The first iteration of SourceCheck checks if the current website's domain and all of the links on the page are credible. You can get it for Firefox here.