Posts

NGO/NPO Reflection

I got the feedback for my presentation a little late, and I only found out about it just now, so I'll quickly go over it. Looking at the feedback, I got the impression that my NGO/NPO presentation mostly lacked explanation. It was too vague. I only briefly introduced what my NGO/NPO's main goals were and failed to deliver the details. The same can be said for my funding ideas. Looking at the other presentations, I did feel a lack of creativity in my activities and methods for funding. I also didn't incorporate much data from my survey. I guess my NGO/NPO wasn't really a good fit for inserting my results; I need to consider that more next time. To be honest, if I had another chance to do this presentation all over again, I'd want to put much more time and effort into it. There were a lot of flaws I noticed after presenting, and the feedback helped me realize how naive I was. Learning from this failure, I'll do my best to keep striving.

NGO/NPO Presentation

My NGO takes place in a future where AI robots (or androids) live among us humans and can think, feel, and communicate to the extent that we can't tell them apart. The NGO I made for my presentation goes by the name of D.E.M., which stands for Define and Equalize Machines. I took the abbreviation "D.E.M." from deus ex machina, a Latin calque from Greek meaning 'god from the machine'. Although a bit different from the original sense, the phrase is often used to represent a god of machines or technology. The main goal of this NGO is to bestow human rights on AI robots. I believe that AI should get its rights gradually, as humans did, but in a different way: by considering its individual functions as its uses develop. That way, we account for both the diversity of AI and its specific capabilities, and we can avoid granting rights that are unsuited to some AI, like a right to family life for Siri or Alexa. Though, I believe rights like

What makes AI, AI?

In the last post, I went over a few results from a survey I conducted a while back. In that post, I said that I'd go over what factors qualify something as AI, since some people don't have a clear understanding. So here it is. The Turing test was devised in 1950 by English computer scientist Alan Turing. Originally called the imitation game, it is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Basically, it determines whether a computer can "think" or not. In the Turing test, a remote human interrogator, within a certain time frame, must distinguish between a computer and a human subject based on their replies to various questions the interrogator asks. By means of a series of such tests, a computer's success at "thinking" can be measured by its probability of being misidentified as the human subject. Computers that have the ability to pass this tes
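The procedure described above can be sketched in code. This is just a toy illustration of the interrogation-and-guess loop, not a real implementation of the test; all the names here (`turing_trial`, `pass_rate`, and so on) are my own made-up examples.

```python
import random

def turing_trial(interrogator, human_reply, machine_reply, questions):
    """One trial: the interrogator questions two hidden subjects and
    guesses which one is the machine. Returns True if the machine was
    misidentified as the human (i.e. it 'passed' this trial)."""
    # Randomly assign the machine to slot 'A' or 'B' so the
    # interrogator cannot rely on position.
    machine_slot = random.choice(["A", "B"])
    transcripts = {"A": [], "B": []}
    for q in questions:
        for slot in ("A", "B"):
            reply = machine_reply(q) if slot == machine_slot else human_reply(q)
            transcripts[slot].append((q, reply))
    guess = interrogator(transcripts)  # interrogator names the machine's slot
    return guess != machine_slot

def pass_rate(n_trials, **kwargs):
    """A machine's success at 'thinking' is measured by how often it is
    misidentified across a series of such trials."""
    wins = sum(turing_trial(**kwargs) for _ in range(n_trials))
    return wins / n_trials
```

A machine whose replies are obviously mechanical gets caught every time, while one indistinguishable from the human leaves the interrogator guessing at chance, which is exactly the probability the paragraph above describes.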

Survey Summary

First, a big thanks to everyone who took part in the survey I conducted on SurveyMonkey. I got a total of 36 respondents, which helped me gain a deeper understanding of public opinion on AI. The respondents ranged in age from 19 to 24, a relatively young audience. What surprised me the most in this survey was the diversity of opinions: the divided opinions were fairly evenly split. It goes without saying that we all own a device equipped with AI, considering Siri is on every Apple device. But a few people answered that they didn't own any AI devices. This shows that some people aren't aware of what AI is, or don't know what counts as AI. (I would like to talk about this in a future post.) Most respondents answered that they first came across the word "Artificial Int

3 kinds of AI

There are 3 kinds of AI: weak AI, strong AI, and artificial superintelligence. Weak AI can only do the tasks it is given and can't deal with unexpected problems. It is said that almost all AI that exists today is weak AI; this kind includes Roomba, Siri, and Alexa. Strong AI is AI that thinks for itself. The last kind, artificial superintelligence, is said to be able to do anything and to greatly exceed human intelligence. It is expected to be created shortly after the creation of strong AI. As of now, AI is created by humans, but once strong AI is made, it will start to research AI itself, and then boom: artificial superintelligence. Since AI can process and calculate far faster than humans, this will happen in a flash. Once this happens, AI will probably wipe out the human race for various reasons; there would be no reason to let a downgraded version of themselves still exist. This has been speculated by many scientists. But there's also another side to this theory. It

A quick overview of Artificial Intelligence (AI).

AI has been studied for decades and is still one of the vaguest subjects in computer science. This is partly due to how large and fuzzy the subject is: AI ranges from machines that can think for themselves to search algorithms used to play board games. It has applications in nearly every part of our daily lives. The term artificial intelligence was first coined by American mathematician and computer scientist John McCarthy in 1956, when he held the first academic conference on the subject. Though the term didn't exist yet, the concept had been recognized long before. In his seminal 1945 essay "As We May Think," American electrical engineer Vannevar Bush introduced a system (the memex) that would amplify people's own knowledge and understanding. Five years later, in 1950, English mathematician Alan Turing published a paper entitled "Computing Machinery and Intelligence," which opened the doors to the field that would be called AI. In this paper, he proposes to consider the question "Can machines think?" and argues that there is no

Short Introduction

"Artiicial Intelligence" (AI) is a term we've all gotten a bit familliar with. They're useful indeed, but we need to think more about the consequences of uncautious development. It is said that in the year 2045, AI will become superior humans. This is a theory put together by Ray Kurzweil, an inventor, and futurist in the US. He also published a book in the 1990s packed full of predictions of the future. It turned out that 86% of the predictions made became a reality. Even so, you might think that 2045 is too soon. Normally, we predict how much progress we’re going to make based on past results. But this is not the case with AI because of The Law of Accelerating Returns, this where technological innovations in the past intertwine with new ones to speed up the advancements in technology. This means progress is made not linearly, but exponentially . So for example, if you take 10 years from the year 2000, and compare it to the next 10 years, the latter has more evolutio