Short Introduction

"Artiicial Intelligence" (AI) is a term we've all gotten a bit familliar with. They're useful indeed, but we need to think more about the consequences of uncautious development.

It has been said that in the year 2045, AI will become superior to humans. This is a prediction put forward by Ray Kurzweil, an American inventor and futurist. In the 1990s he published a book packed with predictions about the future, and 86% of those predictions reportedly became reality. Even so, you might think that 2045 is too soon. Normally, we predict how much progress we're going to make based on past results. But that doesn't work for AI, because of the Law of Accelerating Returns: the idea that past technological innovations compound with new ones to speed up further advancement. This means progress is made not linearly, but exponentially. For example, take the decade starting in the year 2000 and compare it to the decade that follows: the latter sees far more technological change than the former. Many other notable figures (Stephen Hawking, Elon Musk, Steve Wozniak, and others) have warned about the destructive potential of AI.
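
To make the linear-versus-exponential difference concrete, here is a tiny Python sketch. The 40% yearly growth rate is a made-up number, chosen purely for illustration, not a measured figure.

# Toy comparison of linear vs. exponential "progress" over two decades.
# The 40% yearly growth rate is an arbitrary assumption, purely illustrative.
LINEAR_STEP = 1.0    # linear model: add a fixed amount of progress each year
GROWTH_RATE = 1.4    # exponential model: grow progress by 40% each year

linear = exponential = 1.0
for year in range(2000, 2021):
    if year in (2000, 2010, 2020):
        print(f"{year}: linear = {linear:.0f}, exponential = {exponential:.0f}")
    linear += LINEAR_STEP
    exponential *= GROWTH_RATE

The linear model gains exactly as much between 2010 and 2020 as it did between 2000 and 2010, while the exponential model gains roughly 29 times more in the second decade than in the first. That gap is the intuition behind the Law of Accelerating Returns.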

So, is a doomsday scenario an actual possibility? 




References

Buchanan, M. (2008, July). The law of accelerating returns. Nature Physics. https://www.nature.com/articles/nphys1010

Future of Life Institute. (2015, January). AI open letter. https://futureoflife.org/ai-open-letter/

