Thursday, January 18, 2007

Assignment Blog: Week 3

Wiener's "Cybernetics and Society" -

Technology has let a lot of djinn out of bottles... similar to something I noted in the previous week's readings, one of the general themes of technology in the last half-century or so seems to have been finding ways to monitor, control, and protect ourselves against what was developed before. No doubt the atomic bomb is quite the little djinni... sure, it was made to protect the US (and because we thought Germany was close, too), but now everyone (except Iraq) seems to have one, and some of these little emperors (*cough* Kim Jong Il *cough*) don't seem to be the most stable, logical, or trustworthy individuals.

Wiener talks about the rationale behind letting computers dictate warfare. One thing he mentions is that computers become more effective decision makers when they bridge the gap between being mere logical processors and being learners. It's true, I think, that a learning computer can, no doubt, be more effective simply because there will always be some situation it wasn't programmed for, something outside the scope of its algorithms that requires adaptation for a better response, not to mention the possibility of correcting mistakes.

But... it would be kinda nice to have a situation where war became a simple mathematical equation... then our computer and our enemy's computer could just compare stockpiles, troop numbers, efficiencies, and so on, and decide who would win before the bloodshed ever started. Now the trick would be getting the humans to subjugate themselves to the computers' decisions...

Which, of course, is the fear of learning computers. If reading the dry, overly thought-out tomes of Asimov has taught me nothing else, it is this: if computers get smart enough, they might start making decisions we don't like at all... I do recall the short story where the computer decided the best way to fulfill the command to protect the humans was just to take over and not allow them to do anything at all... (or, more scarily, an Outer Limits episode in which the computer [also an adherent to the Three Laws of Robotics] decided the biggest threat to humanity was humanity itself...)

And yet... how cool would it be to have an entirely alien mind, created by us, with which we could truly converse as two intelligent beings? Reading the other assigned piece for this week (very cool reading choice, btw) about the "hardware hackers" of the 70s, it almost seems to me that's what many of them ultimately dreamed of... why they bothered to flip all those little switches and solder together all those chips and parts.

It almost makes a person wonder if a benevolent computer wouldn't be WORTH submitting to... *L* which, considering the Hacker Ethic seems to be a lot about freedom and anti-establishment, would be a pretty ironic offspring.

Anyway, back to Wiener. Feedback. I've taken entire classes with that as the main word in the title (I'm an engineer, after all), and the one thing that's been drilled into me is that feedback is the method by which actual output is compared against, and corrected toward, desired output. Dry definition, no? And yet...
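That dry definition fits in a few lines of code. Here's a tiny sketch in Python (the temperature numbers and the gain are mine, purely for illustration): measure the actual output, compare it against the desired output, feed the error back in as a correction.

```python
def feedback_step(actual, desired, gain=0.5):
    """One cycle of a feedback loop: measure the error between
    actual and desired output, and correct part of the way toward
    the desired value."""
    error = desired - actual           # compare actual vs. desired
    return actual + gain * error       # feed the correction back in

# Start cold and let the loop pull the output toward the target.
temp = 10.0
for _ in range(20):
    temp = feedback_step(temp, desired=20.0)
# after 20 cycles, temp has converged very close to 20.0
```

Each cycle halves the remaining error (with gain=0.5), which is the whole trick: the system never needs to know the "right answer" in advance, it just needs to keep comparing and correcting.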

Computers without feedback are nothing but input/output machines. For a given input, the output will always be the same (unless altered by errors... "bugs" in the machine). But a learning computer would evaluate the results of an action, compare them against some ideal, and adjust. That's how Wiener's chess-playing machines work... they adapt, knowing thousands of chess plays, but knowing also that they must have some variety to avoid being predictable, whether by talented players or, in today's competitive chess world where machines are built to play against machines, by other computers. If a computer wins, it can analyze how and reference that for later. If it loses, that is perhaps even more instructive, because it can learn its opponent's strategy much as a human would. In fact, Belle (the first computer with a master's ranking), Deep Thought, Deep Blue, and others have been besting humans for about 30 years now. Here's a nice timeline. Betcha some "hackers" had fun with that!
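To make the "learns from its games, but stays unpredictable" idea concrete, here's a toy sketch in Python (the opening names and the update factors are entirely made up, and real chess machines are vastly more sophisticated): the machine keeps a weight for each opening it knows, picks openings at random in proportion to those weights so no opponent can count on its choice, and nudges a weight up after a win and down after a loss.

```python
import random

# Toy "learning" move selector: weighted random choice plus a
# win/loss feedback update. All numbers are illustrative.
weights = {"kings_gambit": 1.0, "sicilian": 1.0, "caro_kann": 1.0}

def pick_opening(rng=random):
    """Pick an opening at random, in proportion to current weights,
    so the machine keeps some variety and stays unpredictable."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names])[0]

def record_result(opening, won):
    """Feedback step: reinforce openings that won, discourage losers."""
    weights[opening] *= 1.2 if won else 0.8

record_result("sicilian", won=True)      # sicilian's weight rises to 1.2
record_result("kings_gambit", won=False)  # kings_gambit's drops to 0.8
```

The losing case is arguably the more useful one, just as in the reading: a loss carries information about what didn't work, and the weight update bakes that lesson into every future choice.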

The other aspect of feedback is that it is something humans use too. We rely on our senses to test our environment and react to it. This leads to other possibilities. Not only can computers utilize feedback for their own purposes... but, as discussed in the early part of this article (well, the first pages in the provided reading, not the numerically first ones), it allows them to do something even more exciting (and scary): to take over the feedback functions of humans as well.

New methods of "hearing" for the deaf, for instance, relaying speech via a machine to the sense of touch. Unlike current methods of learning to hear and speak, this would allow the deaf not only to hear those around them without relying on a visual line of sight, but also to "hear" themselves, so they could adjust their own speech toward something that sounds more normal to those of us for whom hearing and speaking are connected rather than two entirely separate things.

Now, taking it one step beyond Wiener... this device could easily be a non-learning machine that simply takes soundwave input and generates predetermined output according to a fixed algorithm. But couldn't it be even better as a learning machine? It could evaluate how well its signals are received and utilized by its user, adjust itself accordingly, and adapt to "amplify" sounds that the user seems not to be picking up. Just a thought...
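Here's roughly what I mean, sketched in Python (the band names, gain limits, and update rule are all my own assumptions, not anything from Wiener): sound is split into frequency bands, each band drives a touch signal with its own gain, and the gain rises for any band the user reports missing.

```python
# Adaptive "hearing-through-touch" sketch: per-band gains that
# grow when the user fails to perceive a band. All values invented.
gains = {"low": 1.0, "mid": 1.0, "high": 1.0}

def to_touch_signal(band_levels):
    """Convert per-band sound levels into per-band touch intensities,
    scaled by the current (learned) gain for each band."""
    return {band: level * gains[band] for band, level in band_levels.items()}

def report_missed(band, step=0.25, max_gain=4.0):
    """User feedback: this band wasn't perceived, so amplify it a bit,
    up to a safety cap."""
    gains[band] = min(gains[band] + step, max_gain)

report_missed("high")
report_missed("high")
signal = to_touch_signal({"low": 1.0, "mid": 1.0, "high": 1.0})
# "high" now comes through 1.5x as strongly as the other bands
```

The non-learning version would just be `to_touch_signal` with the gains frozen; the learning version closes the loop through the user's own reactions, which is exactly the feedback idea again.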

Though this, of course, leads us once again to a scary point... rather than controlling the machines, we are again giving up control to them. And when you see all the "Pod" people with their white wires and their PDAs and their laptops and their cell phones, etc., etc., etc., aren't we already doing that? At worst, are we so intent on becoming cyborgs that we'd just become a brain in a machine? Already most of us rarely walk if we have a real destination in mind. It's easier, faster, and lets us go farther if we use something outside of ourselves. And we barely even notice... if you put a frog in boiling water it will jump out, but if you put it in tepid water and heat it slowly, it will stay until it dies.

And at the same time, there is something seductive, appealing about all of it. We're limited by our physical bodies. Shouldn't we be able to use technology not only to allow those with disabilities to effectively navigate the world of those without, but to elevate and improve us all? *cough* Okay, I don't know where that sermon of mine came from, but I'll leave it.

So there... Wiener in a nutshell: learning machines, feedback, and machines to command, control, and communicate, both to manage technology and to do a better job of it than we mere humans ever could...

---------------

And this is completely unrelated, but rather than making a separate post, can I just say how much fun the Levy reading was? Some thoughts:



Community Memory *L* yep! Looks like a bunch of hippies :) Seriously though...check out the links to the April flyer. Very cool. I'm thinking Levy used this as a reference for his book since it contains the bagel and free clinic searches!



The Altair. Seriously... you have to load BASIC in every time just so you can program it? AND you have to build it yourself? *wow* I am so not a hard-core hacker. If this is what I'd had to work with, the computer would never have advanced. Ever. If it hadn't hit the trash by now, it'd be collecting inches of dust. Here's just a snippet of Assembler code I wrote for a Motorola 68HC11 [running the Buffalo monitor; not an Altair] in class once:


ORG $C000 ;program starting addr.
LDAA VX ;load the value in VX to Accu. A
TAB ;transfer the value to B for later use
LSLA ;multiply VX by 2
LSLA ;multiply VX by 4
LSLA ;multiply VX by 8
SBA ;subtract VX (stored in B) from
;value in Accu. A so value = VX*7
ADDA #120 ;add 120 (decimal) to value
STAA VY ;store value in Accu. A to VY
LSRA ;divide VY by 2
LSRA ;divide VY by 4
LSRA ;divide VY by 8
ADDA #25 ;add 25 (decimal) to value
STAA VZ ;store value in Accu. A to VZ
SWI ;return to Buffalo
END


All that does is make VY=7*VX+120 and VZ=VY/8+25. And that's not even in binary, nor did I have to enter it with switches! eek!

Though, uh, I do have to admit: to input numbers into VX [that code's not shown], it was done with switches, and the output of VZ [also not shown] was done with red LEDs, both in binary, which then had to be hand-translated back to decimal... even after the code was written, it was easier (and more reliable, since switch errors are even easier to make than addition errors) to just compute by hand than to do it on the Buffalo.
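For the curious, here's the same computation translated into Python (my sketch, not anything we ran in class), with the 8-bit wraparound the accumulator imposes at each step and shifts standing in for multiply and divide, just like the LSLA/LSRA instructions above:

```python
def buffalo_program(vx):
    """Python equivalent of the assembly listing above:
    VY = 7*VX + 120 and VZ = VY/8 + 25, all in 8-bit arithmetic."""
    a = (vx << 3) & 0xFF           # three LSLAs: VX * 8, mod 256
    a = (a - vx) & 0xFF            # SBA: 8*VX - VX = 7*VX
    vy = (a + 120) & 0xFF          # ADDA #120 -> VY
    vz = ((vy >> 3) + 25) & 0xFF   # three LSRAs, then ADDA #25 -> VZ
    return vy, vz

# e.g. with VX = 5: VY = 7*5 + 120 = 155, VZ = 155//8 + 25 = 44
```

At least in Python the input isn't eight toggle switches and the output isn't a row of red LEDs to decode by hand.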



Sol. MUCH more friendly looking. Still, I'm not trading in my laptop any time soon. And I never understood why they chose green text for the old machines. Is green easier? It hurts my eyes...



And the Apple II! Aw, how cute! It even has a joystick :) But then...what's the point of a computer if not to play games *L*

1 comment:

rmarslander said...

You have produced a lot of information on the readings. The photos and the clips are a very nice touch, along with the example of your own programming. Why do you think feedback is important, and in what ways do you think computers will continue to advance in their ability to give feedback? If computers are more effective at making decisions, then why are they limited in what they are assigned to do? What line, if any, is there between human and machine? It does seem reasonable to use technology to assist people with disabilities. Awesome job, keep up the good work.
-Ryan