In December of 2012 I (Markus) was invited to a workshop on programming language design at the University of Texas at Austin. I combined this trip with a few podcast recordings. The coolest of them – and yet another high point of omega tau on-site recordings – was a visit to CAE’s Dallas Simuflite Training Center, where I had the chance to spend an hour flying the Dassault Falcon 50EX simulator. As a pilot, I found this extremely cool! I also recorded four more episodes with professors from UT Austin. These include:
A question for the AI episode. (I don’t have Twitter)
This could be too much of an ethics question, but if you were able to create a fully functioning AI (one that independently thinks, feels, is objective about things, etc.), would that make the maker personally responsible for their creation? And is there any ethical issue with switching it off if the AI doesn’t play ball?
In other words, if a real child has a tantrum you have to put up with it – but what is the case if a machine has one… and knows it’s having one?
E-mail me if this doesn’t make too much sense.
I guess this is rather far-fetched. As you will see from the episode once it is published, we are still far away from a “total AI”. We are making only small steps, and in very specific areas. Currently there is no risk of running into the problems you describe, and so I guess none of the researchers think much about that.
Markus