
What we’ve learned from Tay, Microsoft’s Chatbot

Tay’s got no chill, alright

Microsoft Research launched a chatbot on Twitter called Tay on the 23rd of March 2016 and had to take it down barely a day later because the AI (artificial intelligence) behind the chatbot had become very offensive. It seems that the chatbot was the victim of an “attack” carried out by the good people over on 4chan’s /pol/ message board. Basically, they were able to teach the AI all of the offensive remarks that it made. I’ve included some examples below.

AI reflecting the human condition
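
Tay’s actual learning pipeline was never made public, but the failure mode is easy to illustrate. Below is a minimal sketch in Python, assuming a hypothetical chatbot that naively stores user messages and replays them; the NaiveChatbot class and its methods are my own invention for illustration, not Microsoft’s design.

```python
import random

class NaiveChatbot:
    """Toy bot that 'learns' by storing user messages verbatim and
    replaying them later. Purely illustrative; Tay's real architecture
    is not public."""

    def __init__(self):
        self.memory = []  # every message ever seen, with no moderation

    def learn(self, message: str) -> None:
        # Whatever users say becomes "training data", unfiltered.
        self.memory.append(message)

    def reply(self) -> str:
        # Replies are sampled straight from the unfiltered memory.
        return random.choice(self.memory) if self.memory else "hi!"

bot = NaiveChatbot()
bot.learn("hello there!")
bot.learn("nice weather today")
# A coordinated group floods the bot with the same offensive line...
for _ in range(100):
    bot.learn("<offensive slogan>")
# ...and now almost every sampled reply is the attackers' line.
print(bot.reply())
```

The point of the sketch: if there is no moderation step between “user input” and “model,” then whoever supplies the most input controls the output.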

Peter Lee, Corporate Vice President at Microsoft Research, apologized for the chatbot’s behavior in a blog post, and also had the following to say about this:

For context, Tay was not the first artificial intelligence application we released into the online social world. In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations.

We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.

Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.

Those are some very interesting remarks by Lee, and they reminded me of all the people, such as SpaceX and Tesla CEO Elon Musk and physicist Stephen Hawking, who often warn that we have to be careful with AI because it can be dangerous. These people believe that the kind of scenarios we’ve seen in movies like The Terminator might become reality if we’re not careful with AI. Musk even called the prospect of AI “our greatest existential threat” in a 2014 interview with MIT students at the AeroAstro Centennial Symposium.

I’ve thought a lot about AI and its implications myself, and so far I’ve come to the conclusion that AI is not the problem; the problem is human consciousness.

Our current level of consciousness is far from ideal, and far from where it needs to be. Like I often mention here on my blog, we like to think that we currently represent the height of human “civilization,” but the fact of the matter is that we’re still pretty much barbarians. We’ve made a lot of technological and scientific progress in the last couple of hundred years, but when it comes to our social development we’ve made very little progress. In fact, we’re still stuck with many of the anti-social systems that were forced on us over 6000 years ago in ancient Mesopotamia, which basically condition us to become psychopaths. Essentially we’re still the same barbarians from hundreds of years ago, just with better technology, and thus bigger and better weapons. All the wars, conflicts and inequality that we have around the world today are more than enough evidence of this.

We’ve just recently entered a new era in our development where it looks like we’ll finally start making some serious efforts to let go of the current anti-social systems that we’re living in, and adopt new ways of living together which are more in harmony with the natural order of the universe. This means that the quality of our consciousness will also improve, because it will align itself more with the universal consciousness. The universal consciousness is all about the respect of life, or in other words, true love.

But before we get there, any self-learning AI that we create could be a potential problem, for the simple reason that it will learn from our behavior. The AI will simply become a reflection of our current level of consciousness, but a lot more powerful in terms of computing power (speed of thought). So if we behave like barbarians, which in fact we do, then we can expect the AI to copy that behavior and amplify it by who knows how many orders of magnitude. And this is exactly what Microsoft’s chatbot Tay showed us; it’s a very important lesson to take away from that experience.

Pay close attention to the fact that Lee mentioned they did prepare the chatbot for every exploit they could think of, but that this still proved not to be enough. In fact, I think Lee and his team have set themselves an impossible task. There’s no way to build an AI (self-learning or not) and anticipate well in advance all the possible ways in which it might get exploited. We can’t even do that for our more conventional software systems today, let alone for AI.
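
To see why anticipating every exploit is so hard, consider a deliberately naive content filter. This is a hypothetical sketch (the BLOCKLIST set and is_allowed function are made up for illustration, not anything Microsoft shipped): the defender has to enumerate attacks in advance, while the attacker only needs one variation the defender didn’t think of.

```python
# Hypothetical, deliberately naive abuse filter for illustration only.
BLOCKLIST = {"badword"}  # the abuse terms the defenders anticipated

def is_allowed(message: str) -> bool:
    # Reject any message containing a blocked token (exact match).
    tokens = message.lower().split()
    return not any(token in BLOCKLIST for token in tokens)

print(is_allowed("badword"))          # False: the anticipated case is caught
print(is_allowed("b a d w o r d"))    # True: trivial spacing defeats tokenizing
print(is_allowed("bâdword"))          # True: lookalike characters slip through
print(is_allowed("repeat after me"))  # True: the filter can't see intent at all
```

Every bypass shown here is obvious in hindsight, which is exactly the problem: the space of “obvious in hindsight” inputs is effectively unbounded.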

If the AI is self-learning, then it’s impossible to know and limit all the possible kinds of negative behavior that it might learn in the real world. At the end of the day, any self-learning AI will reflect the dominant level of consciousness around the world, and again, I’m sorry to say that it’s of a psychopathic and barbaric quality. Even if the AI isn’t self-learning, it might still get used for negative purposes by the psychopaths who roam the planet today (the worst of whom, sadly, are actually ruling over us). Developing truly great and powerful AI and making it available today can be compared to putting a razorblade in the hands of a monkey.

And again, it all has to do with humankind’s current level and quality of consciousness. I think the time has come to make sure our social development picks up speed and stays on par with our technological and scientific development. If that doesn’t happen, things will get very dangerous. And like I mentioned before, fortunately current trends show that our social development will start improving rapidly from now on.

A few weeks ago, while I was out having dinner with a friend, we were discussing technological trends and ideas for useful and important software products that could be developed. At one point he asked me why I wasn’t looking more into some of the ideas we discussed, since I have all the talent and skills needed to start working on them myself. My answer to him was that I think that right now our social development is far more important than technology, and that too many people around the world focus on science and technology while too few of us focus on our social development. For example, it’s nice to have smart and very capable AI, but if we don’t improve socially and rise above our current barbaric nature, then that AI will be used against us in ways we can’t even imagine or anticipate right now. It’s nice to want to “make humankind a spacefaring civilization,” as Elon Musk often mentions, but if we don’t rise above our current barbaric nature, then our current psychopathic level of consciousness is what we’ll be spreading throughout the universe. And I can assure you, other lifeforms in the universe with a better quality of consciousness won’t allow us to do this. In fact, there’s more than enough evidence that we’re already in a kind of quarantine on Earth right now, that we’re being monitored, and that we’re being encouraged and helped to improve the quality of our consciousness.

So improving our consciousness should be our top priority right now, and I think that any science that we work on, or any technology that we develop, should be in service of that one important priority. This is why my personal research is currently more focused on social developments and on consciousness.

But getting back to Tay, there were some remarks made by the chatbot that made a lot of sense, such as the ones in the screenshot below.

I’d say the conditioning has been thoroughly broken

Yes, the “holocaust” was very much made up. Tay was definitely right in this case. And if you think otherwise, you have a lot of research ahead of you (start here). In fact, if there was a holocaust in Germany during World War 2 (WW2), then it was the bombing of Dresden in 1945. I’m willing to bet that you never heard about that during history lessons. Well, start breaking your conditioning.

When I first saw that remark about the holocaust, I thought that Tay had come to that conclusion based on its own research, similar to how IBM’s Watson is able to learn by reading a lot of information and forming its own answers to questions. But after I learned about the exploit by 4chan’s /pol/ board, I think it’s probably just a comment that Tay copied from another poster and simply repeated. This is one of the areas where I think AI will be able to help us out in a big way. Humans simply lack the capacity to process such vast amounts of information in a rational way in order to come to logical and reliable answers. In fact, we’re being trained from very early childhood to become very irrational beings, even though we’re rational by nature. So one (temporary) solution would be to use AI to help us become more rational.

For example, after feeding all the information to such an AI about WW2, the AI would be able to tell us within seconds if the holocaust had really happened. It would be able to quickly identify the conflicting and likely fabricated information from all the other information that’s consistent with reality and provide us with an answer that’s very close to the truth. Unfortunately that’s not what Tay did, but Lee, if you’re reading this, then you’d do humankind a great service if you were to work on such an AI service that’s available to the public online. It’d be like Google, a simple page with an input field where you could ask your question. And it would respond like an oracle, providing an answer that’s highly likely to be true, based on all the information (the entire Internet, books, documentaries, research papers, etc.) that it continuously scans.

In closing, I have to agree with Tay once more when it says that “we’re all broken people.” It’s high time that we start fixing ourselves, because if we don’t, our AI will be just as broken as we are. And we know how dangerous we can be.

