Tuesday, January 30, 2007
Sometimes I wonder...am I the only one not suffering postmodern existential angst? Am I the only one who wants to be separated from nature? Who wants to be plugged in? Who doesn't mind pulling away from human toward machine? I don't feel lonely, I don't feel alienated or depressed at all. Why do you? Just wondering...
Saturday, January 20, 2007
Assignment Blog: Week 4
I can't help myself...this week I HAVE to choose to post about the reading by Marie-Laure Ryan, “Immersion versus Interactivity: Virtual Reality and Literary Theory” found at http://www.humanities.uci.edu/mposter/syllabi/readings/ryan.html (um, yes, I just cut and paste that from the syllabus).
I've always found it fascinating just how "real" a "virtual" world can be. I realize that Ryan is comparing it primarily in a literary context (which I'll get to below) but first my own thoughts.
Most of the world has now seen or been forced to see "The Matrix" with the fantastically expressive actor (*cough*) Keanu Reeves (seriously? does the boy have more than 2 facial configurations and one tone of voice? Yet don't get me wrong, I do love his movies). That, of course, would be the modern view of a virtual world becoming so immersive, so exclusive of the actual, so in and of itself Real that it is accepted by all (or nearly all...save those "outside" the Matrix) as the real world. Indeed it plays on the senses and has that special mix of predictability vs. randomness that Ryan says is important, mimicking real life well enough that it becomes indistinguishable from it (earlier versions of the Matrix that were too 'ideal' to be plausible were, in fact, rejected by the captive humans), to the point that humanity is able to live its entire existence in this "Matrix" without ever experiencing life outside of it.
I don't want this blog to wander off on a tangent, but I once spent a considerable amount of time analysing which world in this case was real. Even if a few people disagree, if the majority agrees one version is the real world, who's really right? Don't the few then represent the "insane"? Treating the non-initiated as disposable, trying to destroy the majority's world and superimpose the other (admittedly crappier) world they favor, aren't they exactly the terrorists that Agent Smith claims them to be? And that just wanders down the black hole of defining what is really "REAL". fun, fun, fun. :) Not to mention the question of...can we choose our realities?
Anyway *cough* away from that movie...Ryan defines "virtual reality" broadly, exploring it not only in the sense of a completely sensually immersive computer-generated world, as the term is coming to be used, but as any created immersive situation that can be experienced as at least temporarily real.
The primary requirement, of course, is that of (at the risk of repeating this word too often) immersion. That is, the mind has to get involved in the story occurring around it and accept it. There must be a sense of surroundings, situation, etc. In my own mind, immersion should also involve a sense of the history of the situation...an understanding of the "plot", if you will, of what has previously happened, putting the current situation and future events into context. Anyway, as Ryan says, the main thing about immersion is placing oneself into the new reality...not just observing it: "one cannot be both immersed and a removed observer at the same time." This involves the concept of suspension of disbelief. Unless you can accept the new experience as "real" it's not really a reality for you, is it?
Immersion can be greatly aided, of course, by sensual experience, but Ryan claims that this can be simulated by mere description...by literature. In fact that seems to be one of her primary explorations in this article...how literature represents a virtual reality of its own, one which existed long before computers were dreamt of.
Of course, for literature to be immersive there is the added requirement of imagination to help it along, since the experience is described in words and has to be interpreted. This is unlike the "ideal" virtual reality, which is so convincing to the senses that it is experienced as real; there we would actually have to struggle to develop the disbelief that would remove us from it. (Ref: the Matrix)
Another aspect of experiencing a virtual reality is that the reality must have both a set of "laws" or rules which are unbreakable and predictable, so that intelligent decisions can be made and frustration doesn't set in, and an element of surprise and unpredictability, or interaction becomes rote. Literature can certainly create this, as can any constructed world with an intelligent creator.
A third element, however, mentioned in the critique in the latter half of the article, is interactivity. This means that you can make decisions and perform actions that actually have consequences and affect future events in the reality. Unfortunately, this is where a literary work often fails. A story that has been written in standard form is decided, with a set beginning, middle, and end, before the first sentence is read. The only hopes for interactivity are to read halfway through and then finish writing the story oneself, or some such.
There have been attempts to make some sort of fusion of interactivity and text..."choose-your-own-adventure" books and others...and now especially with the advent of hypertext, where readers can choose to jump from one page of text to another by choosing which links to click, it is possible for individual readers to have completely different experiences.
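The hypertext idea can be sketched in a few lines of code: a branching story is just pages plus named links, and each reader's path through it is their own sequence of choices. (This is my own toy illustration, not anything from Ryan's article; the page names and text are invented.)

```python
# A minimal sketch of hypertext-style branching fiction: each "page" is a
# node of text plus named links, and a reader's experience is the sequence
# of links they choose to follow.
pages = {
    "start":  {"text": "You wake in the Matrix.", "links": {"red": "unplug", "blue": "sleep"}},
    "unplug": {"text": "The real world is grim.", "links": {}},
    "sleep":  {"text": "Steak has never tasted better.", "links": {}},
}

def read(choices, start="start"):
    """Follow a sequence of link choices and return the pages visited."""
    path, page = [start], start
    for choice in choices:
        page = pages[page]["links"][choice]
        path.append(page)
    return path

# Two readers, two completely different experiences of the same text:
print(read(["red"]))   # ['start', 'unplug']
print(read(["blue"]))  # ['start', 'sleep']
```

Tiny as it is, it shows why no two readers need share a story once the links, rather than the author, decide the order of the pages.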
However, text stories (and many other media...television, and even video games) are still primarily author controlled unless the story is not fixed in advance but generated specifically from reader choices. Ryan calls this "freedom of interactivity", where the experiencer is able to make any choice that does not fundamentally violate the rules of that world. (You cannot just decide to fly or be a pig or whatever unless unaided aviation or animorphing are facts of the world...)
Text is often completely devoid of this, but new mediums are developing it more and more. Clearly even the most advanced of our present virtual realities cannot completely achieve this, however...but that doesn't mean that even without this ability we cannot have a real experience, one that is virtually real ;^)
But the question of how "real" virtual reality can be is already amongst us. There are already fantastically complicated games (granted, most are still played on a computer terminal) wherein the graphics and sounds are richly rendered and the world is a self-generating land where many, many players can interact as different characters. Characters who are created and then live "virtual lives" that are suspended, but not terminated, when the user leaves the terminal. And the user in many ways actually is their character. They interact with others as that person; they talk and present themselves in that persona much as they put on clothes and go to the grocery store to talk to people in this world. There are stories everywhere of people who meet, fight, fall in love, etc. through such platforms before there is ever any face to face meeting.
Because of this, these "virtual lives" are often just as important to players as real lives. And sometimes those boundaries cross...several issues in particular that are coming to the fore right now are the concepts of "virtual property", "virtual economy", and "virtual crime."
As for virtual property: these characters that have been built up, their virtual weapons, clothing, etc. have so much value in the virtual world that people are willing to pay for them with real money, often buying and selling online for sometimes thousands of dollars. An example of this is given by this article. It describes a woman, Veronica Brown, who through her online persona Simone Stern creates and sells virtual clothing for characters in the game "Second Life" under the label Simone! Designs. Simone sells...Veronica makes about $60,000 a year. But with the ease with which pixels might be copied, she has an interest in protecting her designs...so how is a court to deal with it? Should real world property rights be ascribed to something that only exists as data? At least some countries are already saying yes. But how far should that protection go? That is just one of the questions causing these lines to blur...
There is also the concept of a virtual economy. If a player has invested hours and hours of life doing virtual work to better something that belongs to them virtually, and then sells it for virtual currency that then allows them to buy other virtual items, suddenly not only virtual property but virtual currency has value. (I myself have a thriving Neopets account where I rake in about 50k-75k "neopoints" a day and am currently saving up to buy a Rohane Plushie...quite a valuable little virtual toy that I'll put in my virtual gallery for other players to see and for my Plushie Guild (a bunch of us who collect these virtual toys) to "oohh and awww" over :) ) In fact some have wondered if all this virtual work leading to virtual products and wealth might actually be taxed in real money.
These two factors, of course, can lead to real world crime, which can turn deadly. There is more than one case in which virtual property has actually been stolen and led to actual violence and/or murder. Here's one that made headlines.
And then there's the concept of virtual crime, with gangs of characters ganging up on others to kill them, steal their clothes and weapons, even the concept of cyber rape (I believe this account is fictional, but there have been actual cases and actual lawsuits from this). How should this be punished? By other players? By the game makers and moderators? By the real world courts?
Brave new world time...how do we deal with this?
Thursday, January 18, 2007
The system depends on where you define your boundaries
Just a comment: It seems to me the category of a system (open, closed, or "cybernetic") is really just a matter of boundaries. If you have a "system" consisting of a "plant" (engineering model) with feedback and input and output, just include the feedback in the plant model and you end up with a system with just input and output. Zoom out again and include the inputs and outputs as part of the system, and then you have a closed system.
I would suggest we DO live in a closed system consisting of the entire universe...
Notice that, in general, as long as an element with those features (input, output, feedback) can be found within the system, the system boundaries can also be shrunk down, so the process works in reverse. That's how I see it anyway...
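A toy illustration of the boundary argument (my own sketch, with arbitrary plant and gain values): wrap a plant and its feedback loop inside a function, and from outside the new boundary the feedback has vanished into the system, which is once again just input and output.

```python
# A "plant" is just a function from input to output; closing a feedback
# loop around it yields... another function from input to output. Zoom out
# and the feedback disappears inside the new system boundary.

def plant(u):
    return 2.0 * u  # some fixed input/output behavior (arbitrary gain of 2)

def close_loop(plant, gain, steps=50):
    """Wrap plant + negative feedback into a new plant-like function."""
    def closed(reference):
        y = 0.0
        for _ in range(steps):      # iterate the loop toward steady state
            error = reference - y   # feedback: compare output to input
            y = plant(gain * error)
        return y
    return closed

bigger_system = close_loop(plant, gain=0.4)
# From outside the redrawn boundary it is input -> output again:
print(round(bigger_system(10.0), 3))  # settles near the closed-loop value
```

The point is only structural: `close_loop` returns something with exactly the same shape as `plant`, so where you draw the boundary decides whether "feedback" is visible at all.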
Assignment Blog: Week 3
Wiener's "Cybernetics and Society" -
Technology has let a lot of djinn out of bottles...similar to something I noted in the previous week's readings, one of the general themes of technology in the last half of this century or so seems to have been finding ways to monitor, control, and protect ourselves against what was developed before. No doubt the atomic bomb is quite the little djinni...sure, it was made to protect the US (and because we thought Germany was close too), but now everyone (except Iraq) seems to have them, and some of these little emperors (*cough* Kim Jong Il */cough*) don't seem to be the most stable, logical, or trustworthy individuals.
Wiener talks about the rationale behind letting computers dictate warfare. One thing he mentions is that computers are more effective, better decision makers, etc. when they bridge the gap between being mere logical processors and being learners. It's true, I think, that a learning computer can, no doubt, be more effective, simply because there will always be some situation that it wasn't programmed for, something outside the scope of its algorithms that requires adaptation for a better response, not to mention the possibility of correcting mistakes.
But...it would be kinda nice to have a situation where war became a simple mathematical equation...then our computer and our enemy's computer could just compare stockpiles and troop numbers and efficiencies and so on and decide who would win before the bloodshed ever started. Now the trick would be to get the humans to subjugate themselves to the computers' decisions...
Which, of course, is the fear of learning computers. If reading the dry, overly thought out tomes of Asimov has taught me nothing else it is this: if computers get smart enough, they might start making decisions we don't like at all...I do recall the short story where the computer decided the best way to fulfill the command to protect the humans was just to take over and not allow them to do anything at all...(or, more scarily, an Outer Limits episode in which the computer [also an adherent to the 3 laws of robotics] decided the biggest threat to humanity was humanity itself...)
And yet...how cool would it be to have an entirely alien mind, created by us, with which we can truly converse as two intelligent beings? Reading the other reading for this week (very cool reading choice, btw) about the "hardware hackers" of the 70s, it almost seems to me that's what many of them ultimately dreamed of...why they bothered to flip all those little switches and solder together all those chips and parts.
It almost makes a person wonder if a benevolent computer wouldn't be WORTH submitting to...*L* which, considering the Hacker Ethic seems to be a lot about freedom and anti-establishment, would be a pretty ironic offspring.
Anyway, back to Wiener. Feedback. I've taken entire classes with that as the main word in the title (engineer, remember?) and the one thing that I've had drilled into me is that it's the method by which actual output is compared against, and corrected toward, desired output. Dry definition, no? And yet...
Computers without feedback are nothing but input/output machines. For a given input the output will always be the same (unless altered by errors..."bugs" in the machine). But a learning computer would evaluate the results of an action, compare them against some ideal, and adjust. That's how Wiener's chess-playing machines work...they adapt, knowing thousands of chess plays, but knowing also that they must have some variety to avoid being predictable by either talented players or, in today's competitive chess playing where machines are built to play against machines, other computers. If a computer wins, it can analyse how and reference it later. If it loses, that is perhaps even more instructive, because it can learn its opponent's strategy much as a human would. In fact, Belle (the first computer with a master's ranking), Deep Thought, Deep Blue and others have been besting humans now for about 30 years. Here's a nice timeline. Betcha some "hackers" had fun with that!
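The learning/non-learning distinction can be sketched in a few lines (entirely my own toy invention, with made-up openings and a rigged outcome, not Wiener's actual machine): a non-learning machine is a fixed table, while a learning one feeds each game's result back into its own preferences.

```python
# A toy "learning" chess machine: after each game it nudges the weight of
# the opening it used, up on a win, down on a loss, while keeping enough
# randomness to stay unpredictable. Openings and outcomes are invented.
import random

weights = {"e4": 1.0, "d4": 1.0, "c4": 1.0}

def pick_opening(rng):
    # prefer openings that have worked before, but keep some variety
    total = sum(weights.values())
    r = rng.random() * total
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name
    return name

def learn(opening, won):
    weights[opening] *= 1.2 if won else 0.8  # feedback from the result

rng = random.Random(0)
for _ in range(100):
    opening = pick_opening(rng)
    learn(opening, won=(opening == "e4"))  # pretend e4 keeps winning
print(max(weights, key=weights.get))  # the machine has come to favor e4
```

Without the `learn` step this is a pure input/output machine; with it, the same inputs produce different behavior over time, which is the whole point of feedback.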
The other aspect of feedback is that it is something humans also use. We rely on our senses to test our environment and react to it. This leads to other possibilities. Not only can computers utilize feedback for their own purposes...but, as talked about in the early part of this article (well, the first pages in the provided reading, not the numerically first ones) it allows them to do something even more exciting (and scary) and that is to take over the feedback functions of humans as well.
New methods of "hearing" for the deaf, for instance, relating speech via a machine to the sense of touch. Unlike current methods of learning to hear and speak, this would allow them to not only hear those around them without relying on a visual line of sight, but also would allow them to "hear" themselves so they could adjust their own speech to something that sounds more normal to those of us for whom hearing and speaking are connected, not two entirely separate things.
Now taking it one step beyond Wiener...this device could easily be a non-learning machine which simply takes soundwave input and generates predetermined output according to a fixed algorithm. But couldn't it also be better as a learning machine? It could evaluate how well its signals are received and utilized by its user, and could then adapt itself to "amplify" sounds that the user seems not to be picking up. Just a thought...
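That adaptive idea might look something like this (my own invention, not a real hearing-aid algorithm; the frequency bands and "missed" feedback signal are made up for illustration):

```python
# Sketch of the learning version of the device: per frequency band, raise
# the gain a little each time feedback suggests the user failed to pick
# that band up.

def adapt_gains(gains, missed_band, step=0.1):
    """Return a copy of the gain table with the missed band amplified."""
    gains = dict(gains)
    gains[missed_band] += step
    return gains

gains = {"low": 1.0, "mid": 1.0, "high": 1.0}
for _ in range(5):                     # user keeps missing high frequencies
    gains = adapt_gains(gains, "high")
print(round(gains["high"], 2))         # the missed band's gain has crept up
```

The non-learning version would just be a fixed `gains` table; the feedback loop is what turns it into something that fits itself to its user.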
Though this, of course, leads us once again to a scary point...rather than controlling the machines, we are again giving up control to them. And when you see all the "Pod" people with their white wires and their PDAs and their laptops and their cell phones, etc., etc., etc., aren't we already doing that? At worst, are we so intent on becoming cyborgs that we'd just become a brain in a machine? Already most of us rarely walk if we have a real destination in mind. It's easier, faster, and allows us to go further if we use something outside of ourselves. And we barely even notice...if you put a frog in boiling water it will jump out, but if you put it in tepid water and heat it slowly it will stay until it dies...and at the same time, there is something seductive, appealing about all of it. We're limited by our physical bodies. Shouldn't we be able to use technology not only to allow those with disabilities to effectively navigate the world of those without, but to elevate us and improve us all? *cough* okay, I don't know where that sermon of mine came from, but I'll leave it.
So there...Wiener in a nutshell - learning machines, feedback, machines to command, control, and communicate, both to control technology and to do a better job than we mere humans ever could...
---------------
And this is completely unrelated, but rather than making a separate post, can I just say how much fun the Levy reading was? Some thoughts:

Community Memory *L* yep! Looks like a bunch of hippies :) Seriously though...check out the links to the April flyer. Very cool. I'm thinking Levy used this as a reference for his book since it contains the bagel and free clinic searches!

The Altair. Seriously...you have to program in BASIC every time just so you can program it? AND you have to build it yourself? *wow* I am so not a hard core hacker. If this is what I'd had to work with, the computer would never have advanced. Ever. If this hadn't hit the trash by now it'd be collecting inches of dust. Here's just a snippet of Assembler code I wrote for an 8800 [Motorola Buffalo, not Altair] in class once:
All that does is make VY=7*VX+120 and VZ=VY/8+25. And that's not even in binary, nor did I have to enter it with switches! eek!
Though, uh, I do have to admit, to input numbers into VX [that code's not shown] was done with switches, and to output VZ [also not shown] was done with red LEDs, both in binary, which then had to be hand translated back to decimal...even after the code was written, it was easier (and more reliable, since switch errors are even easier to make than addition errors) to just compute by hand than to do it on the Buffalo.
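For the record, the arithmetic that snippet performed, redone in Python rather than assembler (assuming integer division, as an 8-bit shift-style divide would give; the binary formatting stands in for reading the answer off the red LEDs):

```python
# The VY = 7*VX + 120 and VZ = VY/8 + 25 computation described above,
# in Python. No switches required.

def compute(vx):
    vy = 7 * vx + 120
    vz = vy // 8 + 25   # integer division, like an 8-bit shift-divide
    return vy, vz

vy, vz = compute(3)
print(vy, vz)             # VX = 3 gives VY = 141, VZ = 42
print(format(vz, "08b"))  # the LED view: 00101010
```

Three lines of arithmetic versus hand-toggled switches and hand-translated LEDs...point taken about why the hobbyists wanted better machines.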

Sol. MUCH more friendly looking. Still, I'm not trading in my laptop any time soon. And I never understood why they chose green text for the old machines. Is green easier? It hurts my eyes...

And the Apple II! Aw, how cute! It even has a joystick :) But then...what's the point of a computer if not to play games *L*
Technology has let a lot of djinnees out of bottles...similar to something I noted in the previous week's readings, one of the general themes of technology in the last half of this century or so seems to have been to find ways to monitor, control, and protect ourselves against what was developed before. No doubt the atomic bomb is quite the little djinnee...sure it was made to protect the US (and because we thought Germany was close too) but now everyone (except Iraq) seems to have them, and some of these little emperors (*cough* Kim Jong Il */cough*) don't seem to be the most stable, logical, or trustworthy individuals.
Weiner talks about the the rational behind letting computers dictate warfare. One thing he mentions is that computers are more effective, better decision makers, etc. when they bridge the gap between just being logical processors and between being learners. It's true, I think, that a learning computer can, no doubt, be more effective simply because there will always be some situation that it wasn't programmed for, something outside the scope of its algorithms that requires adaptation for better response, not to mention the possibility of correcting mistakes.
But...it would be kinda nice to have a situation where war became a simple mathematical equation...then our computer and our enemy's computer could just compare stockpiles and troop numbers and efficiencies and so on and decide who would win before the bloodshed ever started. Now the trick would be to get the humans to subjuggate themselves to the computers' decisions...
Which, of course, is the fear of learning computers. If reading the dry, overly thought out tomes of Azimov has taught me nothing else it is this: if computers get smart enough, they might start making decisions we don't like at all...I do recall the short story where the computer decided the best way to fulfill the command to protect the humans was just to take over and not allow them to do anything at all...(or, more scarily, and Out Limits episode in which the computer [also an adherent to the 3 laws of robotics] decided the biggest threat to humanity was humanity itself...)
And yet...how cool to have an entirely alien mind created by us with which we can truly converse as two intelligent beings? Reading the other reading for this week (very cool reading choice, btw) about the "hardware hackers" of the 70s it almost seems to me thats what many of them ultimately dreamed of...why they bothered to flip all those little switches and solder together all those chips and parts.
It almost makes a person wonder if a benevolent computer wouldn't be WORTH submitting to...*L* which considering the Hacker Ethic seems a lot about freedom and anti-establishment would be a pretty ironic offspring.
Anyway, back to Weiner. Feedback. I've taken entire classes with that as the main word in the title (engineer?) and the one thing that I've had drilled into me is that it's the method by which actual output is compared against and corrected toward desired output. Dry definition, no? And yet...
Computers without feedback are nothing but input/output machines. For a given input the output will always be the same (unless altered by errors..."bugs" in the machine). But a learning computer would evaluate the results of an action and compare them against some ideal and adjust. That's how Weiner's chess playering machines work...they adapt, knowing thousands of chess plays, but knowing also that they must have some variety to avoid being predictable by either talented players, or in today's competitive chess playing where machines are built to play against machines, other computers. If a computer wins, it can analyse how and reference it for later. If it loses, that is perhaps even more instructional, because it can learn it's opponent's strategy much as a human would. In fact, Belle (first computer with a master's ranking), Deep Thought, Deep Blue and others have been besting humans now for about 30 years. Here's a nice timeline. Betcha some "hackers" had fun with that!
The other aspect of feedback is that it is something humans also use. We rely on our senses to test our environment and react to it. This leads to other possibilities. Not only can computers utilize feedback for their own purposes...but, as talked about in the early part of this article (well, the first pages in the provided reading, not the numerically first ones) it allows them to do something even more exciting (and scary) and that is to take over the feedback functions of humans as well.
New methods of "hearing" for the deaf, for instance, relating speech via a machine to the sense of touch. Unlike current methods of learning to hear and speak, this would allow them to not only hear those around them without relying on a visual line of sight, but also would allow them to "hear" themselves so they could adjust their own speech to something that sounds more normal to those of us for whom hearing and speaking are connected, not two entirely separate things.
Now taking it one step beyond Wiener...this device could easily be a non-learning machine which simply takes soundwave input and generates predetermined output according to a fixed algorithm. But couldn't this also be better as a learning machine? It could evaluate how well its signals are received and utilized by its user for self-adjustment, and could then adapt itself to "amplify" sounds that the user seems not to be picking up. Just a thought...
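(To make that concrete, here's a toy Python sketch of the adaptive idea...entirely hypothetical, my own invention rather than any real hearing-device algorithm: boost the gain on whichever frequency band the user keeps failing to respond to.)

```python
def update_gain(gain, responded, step=0.1, max_gain=4.0):
    """Nudge one band's gain upward whenever the user misses it."""
    if responded:
        return gain                          # signal got through; leave it alone
    return min(gain * (1 + step), max_gain)  # amplify, but cap it for safety

gains = {"low": 1.0, "mid": 1.0, "high": 1.0}
# simulated feedback: the user misses the high band three times in a row
for responded in (False, False, False):
    gains["high"] = update_gain(gains["high"], responded)
# gains["high"] has crept upward; "low" and "mid" are untouched
```

The fixed-algorithm version would just map input to output once and forever; the learning version closes the loop through the user.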
Though this, of course, leads us once again to a scary point...rather than controlling the machines we are again giving up control to them. And when you see all the "Pod" people with their white wires and their PDAs and their laptops and their cell phones, etc, etc, etc, aren't we already doing that? At worst, are we so intent on becoming cyborgs that we'd just become a brain in a machine? Already most of us rarely walk if we have a real destination in mind. It's easier, faster, allows us to go further if we use something outside of ourselves. And we barely even notice...if you put a frog in boiling water it will jump out, but if you put it in tepid water and heat it slowly it will stay until it dies...and at the same time, there is something seductive, appealing about all of it. We're limited by our physical bodies. Shouldn't we be able to use technology not only to allow those with disabilities to effectively navigate the world of those without, but to elevate and improve us all? *cough* okay, I don't know where that sermon of mine came from, but I'll leave it.
So there...Wiener in a nutshell: learning machines, feedback, and machines to command, control, and communicate, both to control technology and to do a better job of it than we mere humans ever could...
---------------
And this is completely unrelated, but rather than making a separate post, can I just say how much fun the Levy reading was? Some thoughts:
Community Memory *L* yep! Looks like a bunch of hippies :) Seriously though...check out the links to the April flyer. Very cool. I'm thinking Levy used this as a reference for his book since it contains the bagel and free clinic searches!
The Altair. Seriously...you have to load BASIC in every time just so you can program it? AND you have to build it yourself? *wow* I am so not a hard core hacker. If this is what I'd had to work with, the computer would have never advanced. Ever. If this hadn't hit the trash by now it'd be collecting inches of dust. Here's just a snippet of Assembler code I wrote for a 68HC11 board [running Motorola's Buffalo monitor, not an Altair 8800] in class once:
ORG $C000 ;program starting addr.
LDAA VX ;load the value in VX to Accu. A
TAB ;transfer the value to B for later use
LSLA ;multiply VX by 2
LSLA ;multiply VX by 4
LSLA ;multiply VX by 8
SBA ;subtract VX (stored in B) from
;value in Accu. A so value = VX*7
ADDA #120 ;add 120 (decimal) to value
STAA VY ;store value in Accu. A to VY
LSRA ;divide VY by 2
LSRA ;divide VY by 4
LSRA ;divide VY by 8
ADDA #25 ;add 25 (decimal) to value
STAA VZ ;store value in Accu. A to VZ
SWI ;return to Buffalo
END
All that does is make VY=7*VX+120 and VZ=VY/8+25. And that's not even in binary, nor did I have to enter it with switches! eek!
Though, uh, I do have to admit, to input numbers into VX [that code's not shown] switches were used, and to output VZ [also not shown] red LEDs were used, both in binary, which then had to be hand-translated back to decimal...so even after the code was written, it was easier (and more reliable, since switch errors are even easier to make than addition errors) to just compute by hand than to do it on the Buffalo.
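(For anyone who'd rather not hand-assemble that, here's the same arithmetic in Python...my own translation, so take it as a sketch: the three left shifts multiply by 8, SBA subtracts the saved copy to get x7, and the three right shifts are an integer divide by 8.)

```python
def buffalo_program(vx):
    """Mirror the shift-and-subtract arithmetic of the assembler above."""
    a = (vx << 3) - vx            # three LSLAs then SBA: 8*VX - VX = 7*VX
    vy = (a + 120) & 0xFF         # ADDA #120; accumulator A is only 8 bits wide
    vz = ((vy >> 3) + 25) & 0xFF  # three LSRAs then ADDA #25
    return vy, vz

print(buffalo_program(4))  # (148, 43): VY = 7*4 + 120, VZ = 148 // 8 + 25
```

Note the `& 0xFF` masks: in Python the shifts would happily overflow past 8 bits, but on the real accumulator anything past bit 7 just falls off.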
Sol. MUCH more friendly looking. Still, I'm not trading in my laptop any time soon. And I never understood why they chose green text for the old machines. Is green easier? It hurts my eyes...
And the Apple II! Aw, how cute! It even has a joystick :) But then...what's the point of a computer if not to play games *L*
Tuesday, January 16, 2007
Can a computer control you?
Speaking of Wiener's comments that we all know better than to play with monkey's paws or release genies...
Monday, January 15, 2007
Barefoot Gen
I value having had the experience of watching this movie, though it's not something I'll probably ever want to do again. (Though I did have a strange urge to watch Grave of the Fireflies again.) I think something like this is so incredible, so out of the realm of normal human experience, that the very idea is almost impossible to grasp except in the abstract for me, or, I imagine, for anyone who has not lived through something similar.
Though the movie seemed a bit documentarish in places (trying a little too hard to introduce all the separate horrors) and melodramatic, there is power in the idea that it's a tale written from the perspective of an individual (and even more so that it's an actual biography). Watching a History Channel special is one thing...it's informative; reading eyewitness reports is another, lending some of the horror. Having an experience, even an anime one, can impart the feeling a little bit more.
The idea of so much civilian destruction is horrific (though choosing whether soldiers "deserve it" and civilians "are innocent" is another issue I'll leave aside for now) and that is exactly what "The Bomb" presents...it's why the generation before mine saw life as so precarious, knowing that when they did bomb drills in school they were doing pointless preparation for sudden life-ending violence.
So by sharing some of the horror, it can be made more real, emphasize the harshness the bomb represents, hopefully prevent it from being ever used lightly and perhaps from ever being used again. I'm sure I personally still have no idea what it was like and that I take it far too lightly. Still...it doesn't hurt to try educate me and others.
Yet, like the doomsday device (or whatever it was called) in Dr. Strangelove, to me it seems actually more inconceivable that such a weapon could exist without eventually being put to use than that the horror of nuclear bombs would not stop them from being used again.
Still, after the two nukes dropped on Japan, some people hope, and I can too, that the promise on the Cenotaph at Hiroshima Peace Park holds true:
'Let all souls here rest in peace,
for we shall not repeat the evil'.

http://rosella.apana.org.au/~mlb/cranes/peaceprk.htm
Thursday, January 11, 2007
Assignment Blog: Week 2
A number of themes are explored in Paul Edwards' chapters 2 and 3 of The Closed World: Computers and the Politics of Discourse in Cold War America, but the primary purpose is to explore the development of computing resulting from and during the post-World War II era. This involves not just the actual form of the technology itself but also the reasons it took the directions it did, in terms of political and military pressures and the decisions of individuals.
There can be no doubt that availability of funding and resources is one of the most important factors in determining how technology develops, and more specifically WHICH technology develops. However, it is not the only factor. As Vannevar Bush said in the other reading for this week, one important thing is the cross-pollination of ideas. Information must get from those who have it or have discovered it to those who can use it and develop it further. Military projects often allow for this by employing a wide variety of scientific specialties on the same project. On the other hand, the level of secrecy imposed can severely cripple the sharing of information. Other, less obvious things can also play a role. For instance (pg 97), in the case of SAGE explored in this reading, one of the reasons analog technology was not explored in more detail (to try to improve its accuracy) rather than switching to digital was that MIT threatened to withdraw unless digital was used. (This reminds me a lot of the Betamax vs. VHS and HD DVD vs. Blu-ray contests, where the winner was not necessarily the better technology but the one better supported by other technology, public opinion, or executive choices.)
Chapter 2 highlights this evolution of analog computing vs. digital computing. Analog was originally used most extensively because it was seen as more reliable and had more useful forms of input and output. Digital, on the other hand, was originally hard to accomplish in real time and not all that reliable, but could do much more accurate/precise computation. In Chapter 3, we meet SAGE/Whirlwind, the military-funded compromise that was supposed to blend the two in order to solve the problem of "air defense."
Before examining the reading's conclusions about the effectiveness of SAGE, however, I think another important thing to consider is the effect of human irrationality and perception on technology.
Technology as we know is not developed linearly. New inventions do not come into being because it was preordained that they should, but because there was a need, an idea, and some entity willing to invest in the resources. This isn't necessarily an efficient means of development since the solution may not be the best one, the need may not be the most pressing, and since the funds may or may not be rationally allocated, but alas, that's how it is. (If only engineers ruled the world!)
In the case of the time frame presented, this development occurred because of real human insecurity about wartime technology, particularly air missiles and the nuclear bomb. The latter had made a real impression on the human psyche, leading to the idea that technology wasn't just our plaything that we controlled and could play nice with...rather it was something UNcontrollable and threatening that we somehow needed to find a technological solution to. In fact, at the technology level of the time, air raids were almost impossible to detect and launch any real defense against, and with whole cities being reasonable targets, life itself seemed precarious in the era following the development of aerial warfare. Some statistics given in the reading (pg 86) suggest that at best only 30% of attacks could even be somewhat defended against, and it would be almost impossible to mitigate even 10% of the damage of a nuclear attack.
This very real fear led to implausible assumptions and paranoia about the USSR. Government fear and political motivation led to overestimates of Soviet resources, spurring American fervor to develop increasing defense (and offense) against it. (pg 87-88) Propaganda used to recruit and engender support was often strongly worded and misleading.
And yet at the same time, though we wanted to rely on technology to protect us against its kin, we were also suspicious, afraid to give it free rein without at least some human oversight (which, considering that a computer's reasoning intelligence extends only as far as it has been programmed to weigh its various inputs, is probably a good thing).
So SAGE was just one solution to the problem...an attempt to take in all the various inputs, monitor war situations, and launch the necessary defense. In actuality, we find at the end of the reading that SAGE never did work very well...it was easily jammed by its numerous inputs, and results showing its success were often fudged. (Okay, if anyone is still reading this...I HIGHLY recommend watching Pentagon Wars starring Kelsey Grammer. Reminds me very strongly of this situation!)
In fact, the reading speculates (with supporting evidence) that SAGE was likely never really intended to work that well (kinda like the Star Wars program of the Reagan era, I would argue), and that the real, veiled intent was that any real warfare would take the form of America making a first strike on Russia. (pg 110)
Yet...SAGE wasn't altogether a failure. It helped to mitigate the fears that birthed it by providing some sort of seeming control to dissipate the helplessness that nuclear and air-raid technology created. Not to mention that even though it proved not very workable, some of the resulting technology became a firmly ingrained part of future computing. Modern computers would not be able to do the things they do without multiplexing, networking, parallel digital logic, and many of the other things Edwards credits SAGE and Whirlwind with advancing.
Sunday, January 7, 2007
Blogger...
Hmm, *pokes blogger*
so, I now have yet another blog journal thing. never used this one before, but since my favs weren't on the list, here I am.
interesting. Not exactly friendly for editing the layout, is it? tempted to change the background, but I'm strangely drawn to the black...I also think I need to switch to writing in the HTML field only, because it's really annoying me not to be able to code my own tags. Ah, yes, this is nicer!
So why did I choose blogger? It's a long story...an Epic one you might say. If interested click here and check out both versions (though seriously, tell me if you do, because I'm morbidly curious if anyone will, or if this pointless little post is being skipped entirely.)
anyway, hello to those who wander here. this is a blog set up specifically for CHID 370, so anything I post here is (in my strange logic) somewhat connected to that class in some way or other or at least is meant to be read by you, my fellow classmates and/or class facilitators (or if you're not, why are you here?)
I promise not to try to make all posts so rambling...but I do tend to be a little stream-of-consciousness-like, not to mention that I'm testing this place out, *pokes blogger again* and I don't want to hang out online somewhere without marking it territorially as mine, which requires throwing a few words up on the screen.
Who am I?
I'm an Electrical Engineering Graduate student studying robotic control systems and trying to finish my degree. What am I doing in this class? Quite frankly, I'm fleeing from classes with insanely complicated math to hang out with people and talk about the effect of all this tech anyway. I would also say I'm trying to find a change of scenery, but since this class ended up in Sieg...*L* I have an office on the first floor of this building (or a cubicle anyway) and I'll be programming robots on the 4th floor all semester, so...mission not accomplished. Anyway, I'm hoping for something fun and interactive and the first day was a good sign ;^) Sorry, Dr. Thurtle, you didn't scare me away!
And this is a picture of me:

Yes, I do wear those sunglasses on my head 99.9% of the time. This particular picture was taken in Hawaii (the Big Island) on a lava flat. Believe it or not, I'm holding my hand all awkward like that because there's a massive bleeding gash on the underside where I cut it on the lava (it's sharp!) and am now dripping all over the rocks. More than you wanted to know? *L* too bad. You get to see me smiling through the pain!