It was neat to watch (we held a pizza event for Tuesday's show).
Here are some random observations that an outsider to the domain of CS might not immediately make:

1) While the answering part was somewhat impressive, the "playing Jeopardy" part was unfair, I thought. Although Watson did have to actuate a mechanical button, it was fed a text version of the question (yes, I know, the answer) as soon as the question screen was revealed. That's cheating, in my book. Watson should have had to read the screen and/or listen to Alex. Instead it got a pre-processed version of the question immediately. Presumably it could also cycle its button much faster than a human could. As far as I can tell Watson did not have any audio input, so my guess is that as soon as it decided it should answer, it just started pushing the button, regardless of whether Alex was still talking. That removes part of the game.

2) As expected, it did VERY well on questions whose answers were essentially single nouns or proper names. Simple fact lookup, even when it means correlating a few different keywords in a question, isn't really AI; it's just a massive and fast computer (Watson is room-sized). Big Iron, as they say. Roughly 10 seconds to make word associations and find a probable answer. Not bad. (There's a toy sketch of what I mean by keyword lookup at the end of this post.)

3) It did very poorly on questions that needed a whole phrase for an answer (e.g., "What is he is missing a leg?" as opposed to "What is leg?", Watson's actual answer). The best it could manage, it seemed, were two-word answers (the rhyming category, "Obama's Llamas", for example).

4) The interesting part is that no human could manually create the fact and word associations that make up its database, so this had to be automated and tailored to the task. The developers themselves were surprised by some of the answers, which shows how "machine learning" can give surprising results when no one can conceptualize the entire dataset over which the learning happens.

In any case, sorta like crazy uncle Benny, Watson would be fun at a party but is still a long way from passing a Turing test: http://en.wikipedia.org/wiki/Turing_test

Jon.
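To illustrate what I mean in 2) by keyword-correlation lookup, here's a toy Python sketch. Everything in it is made up for illustration (the fact table, the candidates, the scoring); it's obviously nothing like Watson's actual pipeline. It just picks whichever stored candidate has the most keyword overlap with the clue:

[CODE]
import re

# Toy "fact lookup" by keyword overlap. The candidates and their keyword
# sets are hand-made examples, not real data; the point is only that this
# kind of matching is brute-force association, not understanding.
FACTS = {
    "Isaac Newton": {"physicist", "gravity", "calculus", "principia"},
    "Chicago": {"city", "illinois", "airport", "lake", "michigan"},
    "Toronto": {"city", "canada", "ontario", "cn", "tower"},
}

def best_answer(clue):
    """Return (candidate, score) for the candidate whose keywords best overlap the clue."""
    words = set(re.findall(r"[a-z']+", clue.lower()))
    scored = {cand: len(words & keys) / len(keys) for cand, keys in FACTS.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

if __name__ == "__main__":
    # Prints ('Isaac Newton', 0.75) -- three of Newton's four keywords appear in the clue.
    print(best_answer("This English physicist used calculus to describe gravity"))
[/CODE]

Scale that naive idea up to room-sized hardware and terabytes of text and you get very fast fact retrieval with no real comprehension, which is roughly my point in 2).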