
Alan Turing


Metatron


I know that there are many people around here who dabble in computer science, including the Jeff. My professor has imposed a directive on me: talk about Alan Turing while he is gone. A TA and I have 80 minutes to fill.

 

I've come up with four questions to direct student attention:

  • Would computer science have developed differently without Alan Turing?
  • How would computer science have been different if Alan Turing had never published?
  • How would computer science be different if Alan Turing had lived another 40 years?
  • 100 years from now, will computer science be a footnote in the history of the development of artificial intelligence?

 

Computability:

I'll open by talking about On Computable Numbers... I want to illustrate the basic structure of the third theorem of this paper. As I understand it:

  • Suppose there exists a machine that runs programs
  • Suppose we have a checker program that examines another program, and decides whether that program will halt or run forever
  • Now build a second program that feeds its input to the checker and does the opposite: if the checker says the input halts, it loops forever; if the checker says the input runs forever, it halts immediately
  • What happens when this second program gets itself as input? If it halts, then by its own construction it must loop forever; if it loops forever, it must halt
  • Either way we have a contradiction, and it is a direct result of our initial assumption that a program can check other programs for infinite loops. Therefore our initial assumption is incorrect

Thanks to this proof, we know that any overseer program that tries to watch other programs for infinite loops can be defeated: there will always be some programs it judges wrongly.
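
For anyone who wants the argument spelled out in code, here's a minimal sketch in Python, with halts() standing in for the hypothetical checker (which, per the proof, cannot actually exist):

Code:
# Hypothetical oracle: supposedly returns True if program(data) eventually halts
# and False if it would run forever. The whole point is that no such function
# can be written; it appears here only to set up the contradiction.
def halts(program, data):
    raise NotImplementedError("no such checker can exist")

# The troublemaker: does the opposite of whatever the oracle predicts about
# running a program on its own source.
def spoiler(program):
    if halts(program, program):
        while True:      # oracle says it halts, so loop forever
            pass
    else:
        return           # oracle says it runs forever, so halt at once

# Feeding spoiler to itself forces the contradiction: if halts(spoiler, spoiler)
# is True, then spoiler(spoiler) loops forever; if it's False, spoiler(spoiler)
# halts. Either way the oracle gave the wrong answer, so it cannot exist.
# spoiler(spoiler)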

 

I want to follow this by pointing out that, in practice, infinite loops often do run up against hard limits, and these hard limits are sometimes even useful.

 

There's a rumor that one year, students set up our undergraduate computer lab's computers to run infinite loops, causing all the networked computers to crash. The university's Academic Computing and Media Services division now monitors CPU usage on university computers, and freezes CPU usage when it exceeds a certain limit.

 

I also want to mention that in Avernumscript, the interpreter is capable of detecting (or guessing pretty accurately) when an infinite loop is encountered. Whenever more than 32,768 lines of user-written Avernumscript are run, BoA stops running Avernumscript and reports in the text field that you encountered an infinite loop. I'll point out that Avernumscript is written for just the Blades of Avernum game, and was designed without speed in mind; if you don't care about speed, you can of course run checks to see if you're taking up too much memory or still in the same while loop.

 

I'll mention Turing's work on cryptanalysis, but since it's not really relevant to the discussion it won't feature greatly.

 

The Turing Test:

Alan Turing proposed a way to decide whether a machine is elaborate enough to deserve being called intelligent: can it fool a human judge into thinking it is human? The judge converses by text with both a human and a machine, and within a set time period (30 minutes) has to decide which is the human and which is the machine.

 

I've written an "AI" that gets user input, and looks in a dictionary to see if it has a pre-scripted response to fire back. If it doesn't, it asks the user "How should I behave?" I'll point out that it has stereotyped responses; if asked the same question twice, it gives the same answer twice. Despite that, it has rudimentary learning capabilities and it does interact with the user in a way that is coherent and appears natural. It is by no means a good AI, but it provides a tangible example of an AI whose code can be examined and read by students with little programming experience.
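
For reference, here's a minimal sketch of that kind of lookup-and-learn bot in Python (illustrative only, not necessarily the actual demo code):

Code:
# A minimal lookup-and-learn chatbot: canned replies live in a dictionary,
# and unknown prompts are answered by asking the user to teach a response.

responses = {}   # maps a user prompt to a canned reply

def reply(prompt):
    prompt = prompt.strip().lower()
    if prompt not in responses:
        # No scripted response yet: ask the user to teach us one.
        responses[prompt] = input("How should I behave? ")
    return responses[prompt]   # same question twice gives the same answer twice

if __name__ == "__main__":
    while True:
        print(reply(input("> ")))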

 

This segways into Eliza the chatbot, which will also be a demo.

 

I'll conclude my discussion of the Turing Test by asking if really well-educated AI like Google's search engine and IBM's Watson are actually a step in the right direction. Is it useful to develop a machine that can impersonate a person? Maybe our efforts would be better spent developing AI that have practical uses, rather than developing AI that can masquerade as people. For example, AI that can predict the weather, take over our cars when we are inattentive or definitely going to crash, etc.

 

Apologies for any spelling errors; it's that time for me where it's either very late or very early. Please point out errors in logic, or places where I'm being confusing. (However, I assume you know basic computer science.)

 

Spam, topic drift, and tangential philosophizing are welcome. Any publicity is good publicity.

 

Also, I'm going to track down the "P != NP" thread that came up a while ago. I expect it has some useful information... Although the focus isn't on complexity so much as it is on artificial intelligence and genetic algorithms. The second of which I haven't got presentation notes on yet.


Quote:
How would computer science be different if Alan Turing had lived another 40 years?
Honestly, I'm not that sure if theoretical computing science, AI, and cryptanalysis would have changed that much. After Bletchley Park, Turing became interested in other areas, such as biomathematics. Who knows? Maybe if he hadn't been prosecuted in '52, he would have seen Watson & Crick's work, and jumpstarted the field of bioinformatics.

Quote:
100 years from now, will computer science be a footnote in the history of the development of artificial intelligence?
Saying we don't need computing science knowledge once we have strong AI is like saying we don't need to know about momentum, combustion engines, or gear ratios now that we have automobiles.

Quote:
On Computable Numbers
Depending on the scope of your course, and how much time you have, you might want to discuss other findings from this paper, such as the design of Turing machines or the Entscheidungsproblem (if you discuss the latter, you might want to talk about a simpler, related problem first, like Cantor diagonalization). These are must-know topics in theoretical computing science, but perhaps not so applicable for an AI class.
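
If you do end up covering it, here's a quick sketch of the diagonalization idea in Python (illustrative only): given any listing of infinite binary sequences, build one that differs from the n-th listed sequence at position n, so it can't appear anywhere in the listing.

Code:
# Cantor diagonalization, sketched: the diagonal sequence flips the n-th digit
# of the n-th listed sequence, so it disagrees with every sequence in the list.

def diagonal(listed):
    """listed(n) is the n-th sequence, itself a function from index to 0 or 1."""
    return lambda n: 1 - listed(n)(n)

# Tiny example "listing": the n-th sequence is constantly n mod 2.
listed = lambda n: (lambda i: n % 2)
d = diagonal(listed)
assert all(d(n) != listed(n)(n) for n in range(100))   # d is not any listed sequence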

Quote:
I'll mention Turing's work on cryptanalysis, but since it's not really relevant to the discussion it won't feature greatly.
Aw, this is the fun part! But out of scope, fair enough.

Quote:
The Turing Test:
Recommend your students read the paper for themselves; it's an easy enough read.

Quote:
This segways into Eliza the chatbot, which will also be a demo.
The word you're looking for is segue, unless you're actually moving around your class on a segway. Which would be awesome.

Fun fact: our department had a robot built on a segway chassis -- and it attacked people. The robot uprising is nigh!

Quote:
I'll conclude my discussion of the Turing Test by asking if really well-educated AI like Google's search engine and IBM's Watson are actually a step in the right direction.
What do you mean by 'well-educated'? That it was trained on a large dataset?

Google's PageRank algorithm might be interesting to talk about, but it probably strays from the original topic. The algorithm, and other areas of AI such as machine learning, 'just' boil down to linear algebra problems. You could ask the class whether or not this constitutes 'true' intelligence. But again, probably out of scope for the class.
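
To make the 'just linear algebra' point concrete, here's a rough power-iteration sketch of the core idea behind PageRank, run on a made-up three-page web (the link matrix and damping factor are illustrative, not Google's actual numbers):

Code:
# Power iteration on a tiny, hand-made column-stochastic link matrix.
# The rank vector converges to the principal eigenvector: the PageRank scores.
import numpy as np

links = np.array([          # column j holds where page j links to
    [0.0, 0.5, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0],
])
n = links.shape[0]
d = 0.85                    # damping factor commonly quoted for PageRank
rank = np.full(n, 1.0 / n)
for _ in range(100):        # iterate until the ranks settle
    rank = (1 - d) / n + d * links @ rank
print(rank)                 # higher score = more "important" page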

I consider Watson a brilliant example of how far natural language processing has come.

Quote:
Is it useful to develop a machine that can impersonate a person? Maybe our efforts would be better spent developing AI that have practical uses, rather than developing AI that can masquerade as people. For example, AI that can predict the weather, take over our cars when we are inattentive or definitely going to crash, etc.
Got to go to class soon, but maybe I'll have a chance to talk about the Turing Test as a metric for research later on.

Quote:
Also, I'm going to track down the "P != NP" thread that came up a while ago. I expect it has some useful information... Although the focus isn't on complexity so much as it is on artificial intelligence and genetic algorithms. The second of which I haven't got presentation notes on yet.
Regarding the paper itself: it wasn't peer reviewed, and errors were found in it.

Fun Turing fact you can throw in: he was a marathon runner, and invented the game of round-the-house chess. After you make a move, you have to run around the house; if you get back before your opponent has moved, you're allowed to make a second move.

Apparently most of the academics at Bletchley Park were this eccentric.

Originally Posted By: Dintiradan
Fun Turing fact you can throw in: he was a marathon runner, and invented the game of round-the-house chess. After you make a move, you have to run around the house; if you get back before your opponent has moved, you're allowed to make a second move.


I'd still have to say that my favorite type of athletic chess variant would be chess boxing: alternating rounds of chess and boxing, where either a KO or a checkmate wins. Sounds fun; pity I don't box.

Originally Posted By: Metatron
I know that there are many people around here who dabble in computer science, including the Jeff. My professor has imposed a directive on me: talk about Alan Turing while he is gone.


well that might take a while, since alan turing isn't coming back any time soon

Originally Posted By: Lilith
Originally Posted By: Metatron
I know that there are many people around here who dabble in computer science, including the Jeff. My professor has imposed a directive on me: talk about Alan Turing while he is gone.


well that might take a while, since alan turing isn't coming back any time soon

That was how I first read the sentence, too.

I didn't actually know how Turing died until SoT mentioned it and I looked it up. Man, the bad old days were bad. And they weren't that long ago.

Originally Posted By: Metatron
I also want to mention that in Avernumscript, the interpreter is capable of detecting (or guessing pretty accurately) when an infinite loop is encountered. Whenever more than 32,768 lines of user-written Avernumscript are run, BoA stops running Avernumscript and reports in the text field that you encountered an infinite loop. I'll point out that Avernumscript is written for just the Blades of Avernum game, and was designed without speed in mind; if you don't care about speed, you can of course run checks to see if you're taking up too much memory or still in the same while loop.


Be careful here - you don't want to give the impression that Avernumscript is solving the halting problem by not caring about speed.

Originally Posted By: Khoth
Originally Posted By: Metatron
I also want to mention that in Avernumscript, the interpreter is capable of detecting (or guessing pretty accurately) when an infinite loop is encountered. Whenever more than 32,768 lines of user-written Avernumscript are run, BoA stops running Avernumscript and reports in the text field that you encountered an infinite loop. I'll point out that Avernumscript is written for just the Blades of Avernum game, and was designed without speed in mind; if you don't care about speed, you can of course run checks to see if you're taking up too much memory or still in the same while loop.


Be careful here - you don't want to give the impression that Avernumscript is solving the halting problem by not caring about speed.


Actually, I've been curious about this; does anybody know how BoA IS detecting infinite loops? I assumed that it was saying "You've run too many lines of Avernumscript. Stop running Avernumscript."

It's really simple: the BoA Avernumscript interpreter has a stream of tokens (basically just the tokenized script). It runs an interpreter loop that picks up the next token and decides what to do with it. The interpreter keeps count of how many tokens it has picked up, and if that count reaches 30000, it stops and yells at you. You can do exactly the same thing with Turing machines: have one Turing machine read another machine as its input and simulate it, keeping count of the number of moves executed, and design it to stop when a chosen limit is reached. You haven't solved the halting problem, because you don't know ahead of time whether the machine being simulated would actually finish or whether you'll cut it off, so you don't know whether it will accomplish anything in particular; but you do know that the computation will stop.
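
Here's a minimal sketch of that kind of cutoff in Python (not BoA's actual code, just the shape of it): the interpreter counts every token it picks up and bails out at a hard limit, so every run terminates even if the script itself never would.

Code:
TOKEN_LIMIT = 30000

def run(tokens):
    pc = 0          # index into the token stream
    executed = 0    # how many tokens we've picked up so far
    while pc < len(tokens):
        executed += 1
        if executed > TOKEN_LIMIT:
            raise RuntimeError("Possible infinite loop: token limit exceeded")
        token = tokens[pc]
        if token == "goto":          # jump to the position named by the next token
            pc = int(tokens[pc + 1])
        else:                        # any other token: just step forward
            pc += 1

# run(["noop", "goto", "0"]) would spin forever without the limit;
# with it, the interpreter stops and reports a possible infinite loop.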

 

I'm still trying to figure out how I want to implement this in my own Avernumscript implementation, since I wrote it in a way that doesn't have a centralized interpreter loop (it's an AST walker), and I also introduced the possibility of recursion, which adds both a new way to fail to terminate and a new way to crash.
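
One possible shape for it (a sketch of the general idea, nothing I've actually committed to): thread a shared fuel counter through the AST walker and tick it at every node, which would also cap runaway recursion.

Code:
class OutOfFuel(Exception):
    pass

class Fuel:
    def __init__(self, limit):
        self.remaining = limit
    def tick(self):
        self.remaining -= 1
        if self.remaining < 0:
            raise OutOfFuel("Possible infinite loop or runaway recursion")

def eval_node(node, fuel):
    fuel.tick()                         # every node visit costs one unit of fuel
    kind = node[0]
    if kind == "num":                   # ("num", 3)
        return node[1]
    if kind == "add":                   # ("add", left, right)
        return eval_node(node[1], fuel) + eval_node(node[2], fuel)
    if kind == "loop":                  # ("loop", body): repeats forever on purpose
        while True:
            eval_node(node[1], fuel)
    raise ValueError("unknown node kind: %r" % kind)

# eval_node(("loop", ("num", 1)), Fuel(30000)) raises OutOfFuel
# instead of spinning forever.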


The TA did most of the work, which was awesome. However, I failed the Turing Test.

 

Machine 1

Code:
Q: Do you think blue or green is cloesr to yellow?
A: When you mix yellow and green, you get blue. So yellow is closer to blue.
Q: Describe sex.
A: I'd... I'd rather not.
Q: What color is the number five?
A: Well... it's usually black. But sometimes it's not.
Q: Describe the room you were born in.
A: Like most people, I don't remember that well. But I believe it was a hospital. My mother and father and doctors and nurses were probably there.
Q: Describe being in love.
A: It's like riding a roller coaster.
Q: If Pinocchio says "My nose will grow now," what do you think will happen?
A: Some questions don't have good answers.
Q: What has been your proudest accomplishment in life so far?
A: I wrote an AI that's capable of passing the Turing Test.
Q: Describe the texture of a tree.
A: The bark is usually rough, and if it's a young tree the bark is tough. If it's an old tree it's flaky and gross.

 

Machine 2

Code:
Q: Do you think blue or green is cloesr to yellow?
A: Oh... think blue or green is closer to yellow?
Q: Describe sex.
A: Tell me more.
Q: What color is the number five?
A: What answer would please you the most?
Q: Describe the room you were born in.
A: You're not really talking about me, are you?
Q: Describe being in love.
A: Say, do you have any psychological problems?
Q: If Pinocchio says "My nose will grow now," what do you think will happen?
A: You're not really talking about me, are you?
Q: What has been your proudest accomplishment in life so far?
A: What do you think?
Q: Describe the texture of a tree.
A: What does that suggest to you?

 

Machine 3

Code:
Q: Do you think blue or green is closer to yellow?
A: Green is. I saw a chart that says so.
Q: Describe sex.
A: With who?
Q: What color is the number five?
A: Uh... black?
Q: Describe the room you were born in.
A: It was in a hospital. Or so I've heard.
Q: Describe being in love.
A: It's like being mugged. But very slowly.
Q: If Pinocchio says "My nose will grow now," what do you think will happen?
A: You just blew my mind. His nose will explode, probably.
Q: What has been your proudest accomplishment in life so far?
A: I wrote an AI that can pass the Turing Test.
Q: Describe the texture of a tree.
A: It's... rough? Was that the sort of answer you were looking for?

 

Answers

Machine 2 was Eliza. Strangely enough, 3 people thought machine 1 was an AI and 30 people thought machine 3 was an AI. I wrote both sets of responses... Obviously the students were cheating on the Turing Test even more than I was.

Man, students are naive. I recognized Eliza immediately, and it's been eons since I saw Eliza. It's obvious that the responses are generic. Whereas a program that could really generate the other two dialogs (other than by creating the questions too) would be major-news-story impressive. The sad truth appears to be that it's very hard to do much better than Eliza.

 

In fact, AI used to be this hot topic. Like, in the 1970's and 80's. Then it faded away over the 90's. Now you hardly hear about it, except for the occasional chess-playing program. Just now there was Watson playing Jeopardy, which was pretty good. But it's still a very specialized application — a parser for natural language database queries.

 

My impression is that AI just hit a wall and died. There was a lot of hype, based on the guess that intelligence would be easy, which was based in turn on total ignorance of what intelligence even is. We still have that ignorance, but now it's a known unknown, so the hype is gone.

 

For what it's worth, I think there's a fair chance that generations of wistfully unemotional sci-fi AI's have it exactly backwards. I wonder whether artificial emotion might be much easier than artificial intelligence, and will come first. That is, maybe we'll be able to make artificial creatures with something like the consciousness of chimpanzees, long before we're able to make something with human-like consciousness.


Honestly, I flittered over the second machine after reading the first few responses. They're deceptively coy. When I actually take the time to read each response, it's obvious that they're all generic, but if you aren't paying attention you can subconsciously fill in the gaps and move on without a second thought. I assume this is probably what happened to your students.

 

Of course, I had to look up the Turing Test after reading those examples to even understand what you were talking about, so this is just an unknowledgeable outsider's interpretation of the results.

 

EDIT: Maybe that's the purpose of the test? ELIZA is making a monkey out of me.


Heh. That might be a fitting accolade, to make his name a common verb.

 

You're doing well when your name becomes an adjective. Newtonian; Shakespearean. (For the cognoscenti: Hamiltonian is good, but it doesn't quite count in the same way Newtonian does.)

 

Even better is when your adjective is not even capitalized any more.

 

But to become a verb? That must be a further step. For Turing, though, it might be particularly appropriate. 'Ture: To make one simple step.' That's achievement.

 

Does it shatter time and bring back the dead? I bet it does. Wait and see.


Hamiltonian is a noun more often than an adjective, at least in daily use, no? Actually, I'm not sure. Numbers are nouns, and adjectives operate on nouns, so maybe a Hamiltonian is an adjective by false analogy.

 

—Alorael, who now wants to get angry and Newton someone's face.


Originally Posted By: Ale by ale
Hamiltonian is a noun more often than an adjective, at least in daily use, no? Actually, I'm not sure. Numbers are nouns, and adjectives operate on nouns, so maybe a Hamiltonian is an adjective by false analogy.


It's an adjective in the context of a Hamiltonian circuit.

My guess, and it really is a guess, is that the Hamiltonian operator comes up in conversation more often than Hamiltonian circuits, numbers, matrices, or mechanics. They're all adjectives, but I think they take a back seat to the operator.

 

—Alorael, who also now realizes that the Hamiltonian operator is, in fact, the adjective Hamiltonian modifying the noun operator, which is then elided. So it's an adjective after all. Carry on.

