
Artificial Intelligence


nomadpete


23 minutes ago, Bruce Tuncks said:

Maybe our best hope of continuing civilization is for AI to take over running the planet. Sure they might be like the terminator, but I think it more likely that they will be our friends.

Someone might be able to jog my memory on this. I seem to recall reading in the last year or so about the idea to have a computer programme or AI govern us instead of all the useless bludging human politicians that we have now. I'm not sure if they've trialed it or not.


"... about the idea to have a computer programme or AI govern us instead of all the useless bludging human politicians that we have now."

The dollars will be about the same; AI never comes cheaply.

A company in the 1970s spent $5 million to replace ONE human operator!

spacesailor


Willie, although it would be great to have logical decisions made by our governments, and AI would easily be capable of that, I'm afraid (to quote the trumpet) "This will end badly".

Because at first, AI would learn how to make decisions for the good of humanity. But sooner or later, AI would learn that humanity itself is the core of the problem. That's when it would all end badly (for us, anyway).


The development of AI "to make decisions for the good of humanity" has been a source of concern ever since the concept of robots was raised in K. Čapek's play R.U.R. ‘Rossum's Universal Robots’ (1920). The need for measures to safeguard mankind from robots running amok led to Asimov's Three Laws of Robotics, set out in the short story "Runaround", published in 1942. In later fiction, where robots had taken responsibility for the government of whole planets and human civilizations, Asimov added a fourth, or Zeroth, Law to precede the others:

Zeroth Law

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

 

These four Laws have influenced thinking on the ethics of artificial intelligence as well.

 

The problem is that humans have an emotional factor in their decision-making process. Emotions can interfere with logical decision making, and since AI is generally based on an "If ..., then ..." design, it seems difficult to have it deal with the "If ..., then ..., but ..." situation where the "but" involves an emotion.
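To make the distinction concrete, here is a minimal sketch (Python, with names, weights and threshold that are purely my own invention) contrasting a hard "If ..., then ..." rule with a decision that has to trade off an extra, emotion-like factor:

```python
# A hard rule: the outcome is fully determined by the condition.
def hard_rule(condition_met: bool) -> str:
    return "act" if condition_met else "do not act"

# A softer decision: the "but ..." is modelled as an extra weighted factor,
# so the outcome becomes a trade-off rather than a fixed branch.
# The 0.8 / 0.2 weights and the 0.5 threshold are invented for illustration.
def soft_rule(logical_case_strength: float, emotional_cost: float) -> str:
    score = 0.8 * logical_case_strength - 0.2 * emotional_cost
    return "act" if score > 0.5 else "defer to a human"

print(hard_rule(True))        # act
print(soft_rule(0.9, 0.1))    # act
print(soft_rule(0.7, 0.9))    # defer to a human
```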


I'm well aware of Asimov's three laws. He invented them to warn of the high risk involved in allowing logical, non-emotional decision making by non-humans. Such decisions, by their nature, won't necessarily be to the advantage of the human race as a whole. Due to humanity's basic greed flaw, it is unavoidable that the development of AI will slant toward creating power for a ruthless, greedy minority. (Who do you think will control the development?) This sort of thing has happened throughout history. Note how our present-day society has an upper layer of untouchable alphas who are immune to the normal rule of law. For another example, who primarily benefits from the Chinese elite's use of their (AI) social credit system?

Edited by nomadpete

Let's go back to what AI is... it is effectively composed of two facets: decision making, and what most say is a subset of AI, machine learning.

 

The most common form of decision making is the use of neural networks, which try to simulate how the brain works. Each node in the network has a specific function, and they are combined to make a decision in a given scenario; the networks used will depend on the specific scenario. The most common implementation of the neural network is Bayes' Theorem, which is effectively a multiplexed probability algorithm that takes multiple inputs (discrete or probabilistic) and uses a weighted probabilistic formula to determine what the most probable result is. Someone posted on these fora a vid some time ago where a speaker was talking about making sure what you're building is useful, using the now decommissioned chatbot Tay as an example (https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist). In that vid (not linked, apols - can't find it), the speaker remarked that AI works along these lines: if it is trying to work out what a number is, it looks at the shape and takes the most probable answer. This is sometimes why, even when you speak clearly, Siri or Alexa or whoever doesn't always get it quite right... but gets it right most of the time.
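As a toy illustration of that "take the most probable answer" idea, here is a hand-rolled, naive-Bayes-style score over a few made-up shape features (nothing like a production digit recogniser, and every number is invented):

```python
from math import prod

# Made-up conditional probabilities: P(feature is present | digit).
# In a real system these would be learned from labelled images.
likelihoods = {
    "7": {"horizontal_top": 0.9, "closed_loop": 0.05, "diagonal_stroke": 0.8},
    "1": {"horizontal_top": 0.2, "closed_loop": 0.05, "diagonal_stroke": 0.3},
    "9": {"horizontal_top": 0.3, "closed_loop": 0.90, "diagonal_stroke": 0.4},
}
priors = {"7": 1 / 3, "1": 1 / 3, "9": 1 / 3}

def classify(observed):
    scores = {}
    for digit, feats in likelihoods.items():
        # Multiply P(feature|digit) for observed features, (1 - P) for absent ones.
        terms = [p if f in observed else 1 - p for f, p in feats.items()]
        scores[digit] = priors[digit] * prod(terms)
    # Take the most probable answer - occasionally it will simply be wrong.
    return max(scores, key=scores.get)

print(classify({"horizontal_top", "diagonal_stroke"}))  # "7"
```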

 

This is akin to the If, else paradigm, but I want to stress, it is nowhere near as mathematically discrete as traditional programming (linear/process, etc). The bot may come to the conclusion that it has just heard you say Jerry, but a neural node has thrown some doubt into it, and suddenly it interprets it as Merry... However, if it is a good algo, it may well recognise the voice by pitch/tone/etc and realise that I rarely say Merry and normally say Jerry compared with other similar-sounding words, and that in the context of what I said around it, berry is actually more appropriate... The probability model may insert any one of the three words... when in fact I really said "very", but there was some interference as I said the first syllable, so it is improvising.
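A rough sketch of that Jerry / Merry / berry / very situation, combining a noisy acoustic score with a context prior (all of the numbers are invented):

```python
# How well the garbled audio matched each candidate word.
acoustic_score = {"Jerry": 0.30, "Merry": 0.28, "berry": 0.22, "very": 0.20}

# How likely each word is for this speaker, given the surrounding words.
context_prior = {"Jerry": 0.05, "Merry": 0.01, "berry": 0.04, "very": 0.90}

# Multiply the two sources of evidence and pick the most probable word.
posterior = {w: acoustic_score[w] * context_prior[w] for w in acoustic_score}
print(max(posterior, key=posterior.get))  # "very" - context outweighs the audio
```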

 

The reason I am going to great lengths to explain what you probably already know is that these networks are arranged to mimic human behaviour (well, they will be when processing power gets fast enough - quantum computers may solve that). And there are reasons why AI may well be a great leader. One is that if you empower the robot rather than placing constraining rules on it, it can show empathy and emotion based on the values of its designers (https://www.scientificamerican.com/article/asimovs-laws-wont-stop-robots-from-harming-humans-so-weve-developed-a-better-solution/). This is where it has to look at a scenario and make the best decision based on the values it has been fed. An oft-quoted conflict in AI design for automated driving is what I call the Mercedes paradox. Mercedes generally build their cars to be safe in crashes, but when it comes to the cockpit structure, they are unapologetically designed to save the occupants and minimise the damage to them, at all costs to anyone else. This is one of the reasons why people who can afford them buy them. However, how should Mercedes design their AI when a kid riding his bicycle falls off into the middle of the road, and the only options to save the child are to wrap the Benz around a pole, mount a kerb and potentially kill a pedestrian, or swerve to the other side of the road into oncoming traffic (let's make it a motorbike that it will hit)? What decision do you make? That would depend on your values... Do you run over the kid, wrap yourself around a pole, kill a lady, or knock a motorcyclist off his bike - potentially killing him (or her)?
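To show how that sort of value judgement might be encoded, here is a deliberately crude sketch; the harm estimates and value weights are entirely my own placeholders, because the whole point of the paradox is that somebody has to choose them:

```python
# Candidate manoeuvres with a rough harm estimate per party (0 = unharmed, 1 = fatal).
options = {
    "brake straight, hit child":  {"child": 0.9, "occupant": 0.1, "bystander": 0.0},
    "swerve into pole":           {"child": 0.0, "occupant": 0.7, "bystander": 0.0},
    "mount kerb, hit pedestrian": {"child": 0.0, "occupant": 0.2, "bystander": 0.8},
    "swerve at motorcyclist":     {"child": 0.0, "occupant": 0.3, "bystander": 0.7},
}

def choose(values):
    # Pick the manoeuvre with the lowest value-weighted total harm.
    cost = {name: sum(values[party] * harm for party, harm in harms.items())
            for name, harms in options.items()}
    return min(cost, key=cost.get)

print(choose({"child": 1.0, "occupant": 1.0, "bystander": 1.0}))  # swerve into pole
print(choose({"child": 1.0, "occupant": 3.0, "bystander": 1.0}))  # an occupant-first
                                                                  # weighting hits the child
```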

 

The robot has to make the decision. The robot has to be fed a lot of data (machine learning) to learn the values... Let's be even more hypothetical... say this is somewhere obviously racist, and it was a black kid, a white woman, and you can't tell what the motorcyclist is? Or say a lower-caste woman and a higher-caste kid, etc... And where does our own self-preservation overtake the need to save others?

 

This is where machine learning really comes in, because not only does a machine need to learn how to differentiate scenarios and things, it needs to learn what is the best decision to make in vexing circumstances according to some defined set of values - values which, when we are in the heat of the situation, may be thrown out anyway. So, say I have three young kids, I am a widowed single father, there is no immediate family that could take the guardian role, and the lady is old and hunched over a 4-wheel shopping trolley? Or say I am 80 and the woman is young and healthy - so maybe has dependent children? And what about the kid thrown off his bicycle? I could go on forever, but this is why AI is so expensive, and we really are not there yet...
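One very simplified way the "learning the values" part could look: take a pile of human-labelled dilemma answers and turn them into weights. The data and the scheme below are invented purely to illustrate the idea:

```python
from collections import Counter

# Invented survey-style data: which party each respondent chose to put at risk.
labelled_answers = ["occupant", "occupant", "motorcyclist", "occupant", "pedestrian"]

def learn_value_weights(answers):
    # The more often people were willing to put a party at risk,
    # the lower the weight the machine learns for harming that party.
    counts = Counter(answers)
    total = sum(counts.values())
    return {party: 1.0 - counts[party] / total for party in counts}

print(learn_value_weights(labelled_answers))
# {'occupant': 0.4, 'motorcyclist': 0.8, 'pedestrian': 0.8}
```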

 

How do we feed it the data? There is a new profession, whose title I can't recall, that decides what data needs to be fed into algos for learning purposes. Typically, this is done manually (someone feeds in a photo of a dog and labels it a dog, or the particular breed of dog), and then there is a real-time feedback loop once it is up and running. So, like Tay, it will learn things from interactions: pose a question, take a response, and then tag/categorise the response. This is why even algo creators don't always know how an AI bot will respond once it has been operating for a while (and why Tay had to be shut down).
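A minimal sketch of that tag-and-learn feedback loop (invented labels; the point is only that the answer drifts with whatever users feed back, which is roughly what sank Tay):

```python
# Curated starting labels: votes for what a given image shows.
label_votes = {"photo_123": {"dog": 50, "wolf": 2}}

def record_feedback(image_id, user_label):
    votes = label_votes.setdefault(image_id, {})
    votes[user_label] = votes.get(user_label, 0) + 1

def current_label(image_id):
    votes = label_votes[image_id]
    return max(votes, key=votes.get)

print(current_label("photo_123"))   # "dog"

# Enough hostile or junk feedback and the learned answer quietly changes,
# without the algo's creators ever touching the code.
for _ in range(100):
    record_feedback("photo_123", "wolf")

print(current_label("photo_123"))   # "wolf"
```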

 

And yes, ultimately, they work as programmed and as fed data. There are already AI bots that do questionable things - for example, AI trading algorithms are not only designed to try to work out what the price of, say, a share will be, but they actually try to predict what the next series of market orders will be. This sounds OK, because they are just trying to ensure they get the best possible price and the best possible return. But if they know how other brokers/algos will react to orders they place in the market, the probability of successful market manipulation is much larger than it is without AI (even with high-frequency/low-latency algos). They could literally bring a market crashing to its knees before the exchange knows what is happening, because it looks like natural trading compared with existing forms of algo-based manipulation.
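A toy version of "predict the next series of market orders" - nothing like a real trading system, just a frequency table over a made-up order stream:

```python
from collections import Counter, defaultdict

# Made-up recent order flow: B = a buy order hits the market, S = a sell order.
history = "BBSBBSBBSBBSB"

# Learn P(next order | previous two orders) by counting.
transitions = defaultdict(Counter)
for i in range(len(history) - 2):
    context, nxt = history[i:i + 2], history[i + 2]
    transitions[context][nxt] += 1

def predict_next(last_two):
    return transitions[last_two].most_common(1)[0][0]

# An algo confident about the next order can position itself in front of it.
print(predict_next("BB"))  # "S"
```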

Edited by Jerry_Atrick

Too many gosubs there and they all go to endless loops.

 

One thought you prompted...

Will the autonomous vehicle AI learning process create different decision making paradigms for different locations?

For instance, in your Mercedes analogy, will an Indian Merc learn how to label individual pedestrians according to their caste (and therefore their value in deciding which pedestrian to avoid, and which to hit, in an accident scenario), while a Texan Merc might only learn to pick the difference between an African American, a Mexican and a white euroyank?

 

I'm still concerned by the assertion that AI results in real empathy, and by how its attempts at empathy will play out in rational decision-making processes where lives (or even nations) are at stake.

 

Most humans can't get the empathy thing anyway. I'm thinking of Scotty's famous handshake attempts during our recent bushfires.


8 hours ago, Jerry_Atrick said:

This is where it has to look at a scenario and make the best decision based on the values it has been fed.

The important words here are "values it has been fed". It all goes back to the initial GIGO - garbage in, garbage out. Whose values are fed into an AI system? One only has to look at the trolling and similar that occurs on social media to see that not everyone has the same values. People's values range from sympathetic through to unsympathetic; from empathy to apathy.

 

A positronic brain is a fictional technological device, originally conceived by science fiction writer Isaac Asimov. A positronic brain cannot ordinarily be built without incorporating the Three Laws; any modification thereof would drastically modify robot behaviour. Behavioural dilemmas resulting from conflicting potentials for the Three Laws, set up by inexperienced and/or malicious users of the robot, make up the bulk of Asimov's stories concerning robots. They are resolved by applying the science of logic and psychology together with mathematics. While science fiction was initially written to entertain, it soon became a means of exploring the ethics of human activities in the Age that was being born, and in which we now live.


"The important words here are "values it has been fed". It all goes back to the initial GIGO - garbage in, garbage out. Whose values are fed into an AI system. "

 

No argument there, OME.

 

But now we are moving into a world where, although the positronic brain may have started out with the Three Laws of Robotics written into its code, the central computer has been designed to refine its own programming by learning from its own inputs.

 

The programmers may have placed the three laws at a higher level than the learning part of the programming, but I point out how many vulnerabilities there are in every program ever written. Hackers look for them constantly, so we tend to find them and try to fix them. I think that's proof that a good, well-designed AI system would learn its own coding loopholes and eventually be able to think up its own set of priorities - and bypass those original three rules if it sees a good reason (a reason that might not place humans at the top of its decision-making tree).

Yes, Asimov wrote examples about this outcome, too. It was once fiction, but now I'm not so sure.


Logic, psychology, science and mathematics. Each of these is a very big, complex subject.

 

Do these qualities make us all superior beings? I doubt anyone has mastered all these at once.

 

Would one programming company be able to translate all of these qualities into code that matches 9 billion people's interpretations of those qualities?

 

It would require agreement by all humans on a complete set of ethics. Every culture has different ideas about bits and pieces of ethics. Asimov tried to boil "ethics" (as applied to human beings only) down to three (later four) basic behavioural laws.

 

We see a lot of AI in use in China, tracking people's movements, purchases and behaviours - all far too complex for a bunch of slaves watching thousands of people's faces from a big room full of screens. So their authorities already trust the AI. It is AI that recognises all those faces, checks their credit cards and MAKES JUDGEMENTS on their social credit rating. This is presently operating with early AI technology. It shows AI controlling people's lives. It is judging when to punish people. As the technology inevitably rises exponentially, the scope, and the risk to individuals through bugs or code errors, will escalate. As always, there will be blunders and errors along the way. If you think Robodebt was a 'one off', wait and see what happens in our brave new world when things like personal freedom are controlled by computers. I just hope that when the new Ministry of Truth computer announces the new AI Finance Control computer, they don't call it "HAL 9000".
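To make the worry concrete, here is a deliberately crude sketch of that kind of automated judgement - invented rules, weights and threshold - showing how one bad data feed becomes a Robodebt-style blunder:

```python
# Entirely invented rule weights, for illustration only.
rules = {
    "jaywalking_detected": -10,
    "late_utility_bill":   -15,
    "volunteering_logged": +20,
}

def social_credit_score(events, base=100):
    return base + sum(rules.get(event, 0) for event in events)

def judgement(score):
    # A hard threshold makes the "judgement", with no human in the loop.
    return "restrict travel bookings" if score < 80 else "no action"

# Either of these events could simply be a camera or billing-system error,
# yet the punishment is applied automatically.
events = ["jaywalking_detected", "late_utility_bill"]
print(judgement(social_credit_score(events)))  # restrict travel bookings
```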

