
This is scary - how AI is now even writing the articles you read


Recommended Posts

I can see where AI can be used for creating mundane communications such as advertisements and, following them, instruction manuals, but isn't emotion an essential part of both written and musical creation? 

 

I can see that an AI composer could create a military march, or a waltz, but how could it interpret the emotion of, say, the sound of a babbling brook, or a chiming church bell?

  • Agree 1
Link to comment
Share on other sites

Leisure time is such a relative term.. today's leisure will be tomorrow's work...

 

Sometimes it takes a long time for technology to be adopted, and sometimes it is never adopted properly - or never realises its true potential.

 

But, yes, those predictions in the late 60s to early 80s were a little off.. There are two reasons for this. First, there was no decision-making automation. Yes, software makes decisions based on highly structured and known inputs to determine what to do or not to do. For example, your ATM would check you have enough money in your account before it would dispense you some cash. This sort of processing simply speeds up what humans do.. But if you think about those ATMs, even when they first came out, they were a leap forward and did threaten branch banking. It's just that banks weren't prepared to do away with branches because of the culture shift.
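That ATM example is classic structured-input automation: a fixed rule over known fields, no learning anywhere. A minimal sketch of the idea (the function and its limits are illustrative, not any real bank's logic):

```python
def can_dispense(balance, requested, daily_limit=500.0):
    """Rule-based check over structured, known inputs - no learning involved."""
    return 0 < requested <= balance and requested <= daily_limit

# The machine just speeds up what a teller would check by hand.
print(can_dispense(balance=300.0, requested=100.0))  # True
print(can_dispense(balance=300.0, requested=400.0))  # False - insufficient funds
```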

 

Which leads to the second reason. While management could see the benefit of discrete automation, they couldn't see the "benefit" of a culture shift. This often takes the next one or two generations attaining management. An example: one software company I worked for was one of the first to release what today is called "Procure to Pay", which is basically: when an item (or service) is received and passes warehouse inspections if required, an accounts payable invoice is automatically raised and included in the appropriate payment run. All taxes are taken care of (as applicable). There was no need for the vendor to raise an invoice, or for the accounts payable department to receive invoices, key them in, manually match them to the purchase order and ensure all appropriate rules were met; check the receipts to ensure the items were received and inspected, etc. This saved the vendor time, money and resources. However, even though clients admitted, one after another, that they had never rejected an item already received and accepted by warehouse workers, they still didn't want to let go.. Nowadays it is commonplace, some 30 years after it was first introduced.
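The procure-to-pay flow described above boils down to: goods receipt plus passed inspection automatically raises the AP invoice against the PO, tax included. A simplified illustration (the field names and tax handling are invented for the sketch, not from any real product):

```python
def raise_invoice(receipt, tax_rate=0.10):
    """Auto-raise an accounts-payable invoice from a goods receipt.

    No vendor invoice, no manual keying or matching: once inspection
    passes, the receipt itself drives the payment run.
    """
    if not receipt["inspection_passed"]:
        return None  # rejected goods never reach accounts payable
    net = receipt["qty"] * receipt["unit_price"]
    return {
        "po_number": receipt["po_number"],
        "net": net,
        "tax": round(net * tax_rate, 2),
        "total": round(net * (1 + tax_rate), 2),
    }

receipt = {"po_number": "PO-1001", "qty": 5, "unit_price": 20.0, "inspection_passed": True}
print(raise_invoice(receipt)["total"])  # 110.0
```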

 

Another favourite of mine is customer relationship management software. Even back in the late 90s the technology was truly enabling - it presented an awesome amount of disparate information about a client, neatly, to the call centre operator within seconds. The opportunity was to have highly trained customer service reps who could make most decisions based on the information in front of them, rather than lemmings just following scripts. Because calls could be dealt with quicker and with less hierarchy, the idea was that fewer, better-trained customer service reps would lead to lower costs (and fewer people required) and better customer outcomes. Sadly, it is still estimated around 90% of implementations simply log calls, still implement archaic procedures, and are used to emphasise the scripted way of doing things - but allow it to be done from low-cost countries (e.g. India). The result - poor customer service, and mounting costs because people are no longer staying loyal. In fact, studies are now showing that where CRM software is used as intended, costs are lower and customer loyalty (and therefore revenue) is much better.

 

Whereas systems of old are effectively transactional and use structured and known inputs, AI, on the other hand, uses heterogeneous inputs that are largely unstructured. For example, AI trading systems will parse news articles (print, radio, and TV) as well as what is happening on the markets themselves to come up with a trading strategy. One client of our firm uses our technology to put through more than 300 orders per second (i.e. requests to buy or sell stocks) on a single exchange. They use all of these inputs to work out a strategy to buy and sell shares to take advantage of small movements in prices. Without AI, they could rely on what is happening in the markets, but they would be exposed to events outside the markets, which increases the risk considerably. In fact, AI funds make the best gains, and suffer the lowest level of losses, on average - and that is over a 7 - 10 year span. There is no need for the chalkie (long gone), trading pits, brokers, nor indeed masses of traders. Trading floors these days are shadows of themselves.
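A crude sketch of the idea: blend a structured market signal with a sentiment score derived from parsed news into one decision. This is a toy, not the firm's actual system - the weights and thresholds are invented:

```python
def trade_decision(price_momentum, news_sentiment, w_news=0.4, threshold=0.2):
    """Blend a structured market signal with a sentiment score derived from
    parsed news (the unstructured input) into a buy/sell/hold decision."""
    score = (1 - w_news) * price_momentum + w_news * news_sentiment
    if score > threshold:
        return "buy"
    if score < -threshold:
        return "sell"
    return "hold"

# Positive momentum, but news bad enough to flip the decision:
print(trade_decision(price_momentum=0.3, news_sentiment=-1.0))  # sell
```

A market-only system would have bought on that momentum; the news input is what exposes the outside-the-markets risk the post mentions.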

 

In retail banking, I am not sure what is happening in Aus, but branches here are closing at an alarming rate. While they don't need AI to take deposits, allow withdrawals and calculate interest, they use AI for all sorts of things - predominantly user identification and authentication. I can open a bank account without having to go to a branch, by submitting a copy of my documents, which is checked for authenticity, plus my webcam and phone. Almost every function is then available online as if I were in a branch. AI is used to recommend insurance and investment products, or debt facilities (loans, leases, etc). I would predict that in about 10 - 15 years, in the UK, each bank will have branches in regional cities only, with the Post Office providing walk-in services on an agency basis outside of these centres - which it already does.

 

The other difference between AI and previous technology is that AI can "learn". Which means the number of programmers, engineers, quants, etc. can reduce substantially as well. AI, combined with robotics, offers a much more realistic disruption to the human workforce than earlier generations of technology. We are seeing its effects today. The economy and society can absorb the redeployments needed - but for how long?

  • Informative 3

You are still using the 1968 argument.... But you do put it convincingly......

 

"Look at our brand new toy - hire us to implement it and it solves all your old problems, gives business more profit, and keeps all the workers happy."

 

 

  • Like 1

Whenever a radical game changer happens (such as A.I.), some tall poppies tumble and some unexpected players suddenly get mega rich.

 

Once the dust settles and everybody plays by the new rules, it all becomes the normal trusted way of business..... Until the next radical game changer.

 

  • Like 1
  • Agree 1

A.I. does offer the hope of having its own 'Plan-Do-Check-Act' process of continual improvement. But unless it is carried out by one big master server that co-ordinates everything that A.I. learns, won't it end up just like having a population of individual humans each doing their best to make a profit from the financial structure all around them? It might end up bringing bigger, stupider chaos than 'Real Intelligence'.

  • Like 1
  • Agree 1

Talking of culture shifts, see how our daily financial transactions are moving mainly to digital means. The cashless society has well and truly arrived. Remember when the credit card was introduced? Credit only, and a hardcopy invoice sent out each month. Today, those of us who still use plastic to complete transactions look like technological old fogies, and those who use banknotes are definitely "last century" (unless you are an unregistered recreational pharmaceuticals supplier).

  • Like 2
  • Agree 1

The son of friends of mine is involved with AI. He studied it at uni and is now lecturing there, among other things. He was off to London, I think, to attend a get-together of experts, to discuss ways to ensure that AI cannot make decisions harmful to people. That, to me, shows that even the experts have their doubts about the safety of AI.


The Three Laws of Robotics, known as Asimov's Laws, are a set of rules devised by science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround". The Three Laws, quoted from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:

Zeroth Law

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

 

The Three Laws, and the zeroth, have affected thought on ethics of artificial intelligence as well.

  • Like 1
  • Informative 1

8 hours ago, nomadpete said:

You are still using the 1968 argument.... But you do put it convincingly......

 

"Look at our brand new toy - hire us to implement it and it solves all your old problems, gives business more profit, and keeps all the workers happy."

 

 

That is the whole basis of innovation. The difference is that the technology in hardware and software algorithms has moved so far ahead since 1968 that disrupting human work is much closer than it was, and to be honest, it has started - it just hasn't hit critical mass yet.

 

8 hours ago, nomadpete said:

Whenever a radical game changer happens (such as A.I.), some tall poppies tumble and some unexpected players suddenly get mega rich.

 

Once the dust settles and everybody plays by the new rules, it all becomes the normal trusted way of business..... Until the next radical game changer.

 

Agree... and the new rules will be less human work.. Which is worrying in itself for the health of individuals, and society - imagine what the toxic among us could get up to with more time on their hands, and a wealth redistribution system would be required as a result.

 

The next game changer is quantum computing, which applies quantum mechanics at the machine level so a single bit (1/8th of a byte) can have multiple concurrent states, rather than 1/0 or on/off (or -5/+5V). This will allow processing power to grow almost exponentially, which will allow the application of advanced AI in miniaturised devices that are inconceivable today. I think they are probably about 10 years off getting beyond the theory, and probably 20 before it is commercially available. Could be sooner, though.
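On the "multiple concurrent states" point: a qubit holds a superposition a|0> + b|1> with |a|^2 + |b|^2 = 1, and measurement collapses it to 0 or 1 with those probabilities. A minimal classical simulation of a single qubit's measurement statistics (illustrative only, not a quantum computing library):

```python
import math

def measure_probs(alpha, beta):
    """Probabilities of reading 0 or 1 when measuring alpha|0> + beta|1>."""
    norm = abs(alpha) ** 2 + abs(beta) ** 2  # normalise in case |a|^2+|b|^2 != 1
    return abs(alpha) ** 2 / norm, abs(beta) ** 2 / norm

# An equal superposition: both outcomes equally likely on measurement.
p0, p1 = measure_probs(1 / math.sqrt(2), 1 / math.sqrt(2))
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```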

 

8 hours ago, nomadpete said:

A.I. does offer the hope of having it's own 'Plan-Do-Check-Activate' process of continual improvement. But unless it is carried out by one big master server that co-ordinates everything that A.I. learns, won't it end up just like having a population of individual humans each doing their best to make a profit from the financial structure all around them? It might end up bringing bigger stupid chaos then 'Real Intelligence'

Like the GPT series, there are already centralised AI frameworks that applications piggyback off. However, there is no need for a single server to master everything. In fact, that can be more dangerous than a decentralised model. The difference between people and AI is the ability to assimilate the vast amounts of data, both freely and not freely available. For example, tag-based AI requires people to feed it, as does deep machine learning - but the latter only needs seeds; the rest it can work out for itself by trawling publicly and privately accessible resources, in the same way a search bot does (which is just AI). Some will be at different levels of maturity - which is why Google is generally a very good search engine and the others are forever catching up. In fact, from a commercial perspective, the unique selling point will be the data you carry and the AI you offer - and how it is optimised to your market segment's needs. Remember, also, that AI has to be integrated into robotics to work.
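The "seeds" idea is essentially a crawl frontier: start from a few pages and let the system discover the rest by following links, the way a search bot does. A toy breadth-first crawler over an in-memory link graph (no network, purely illustrative):

```python
from collections import deque

def crawl(link_graph, seeds):
    """Breadth-first discovery from seed pages over a dict of page -> links."""
    seen, frontier = set(seeds), deque(seeds)
    while frontier:
        page = frontier.popleft()
        for link in link_graph.get(page, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen

graph = {"home": ["about", "news"], "news": ["story1"], "about": []}
print(sorted(crawl(graph, ["home"])))  # ['about', 'home', 'news', 'story1']
```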

 

40 minutes ago, Yenn said:

The son of friends of mine is involved with AI. He studied it at Uni and is now lecturing there among other things. He was off to London I think to attend a get together of experts, to discuss ways to ensure that AI cannot make decisions harmful to people. That to me shows that even the experts have their doubts about the safety of AI.

Hope he has a good time. I am not sure how one would ensure AI can't make detrimental decisions, unless all compilers are written to identify and prevent them.. which would mean the compilers would have to have AI built into them - no mean feat.. and as they learned new ways that detrimental decisions could be made, they would have to recompile already-compiled code.. the world would grind to a halt. And, of course, nefarious regimes would just write their own compilers, and no doubt operating systems, anyway.

 

5 minutes ago, old man emu said:

The Three Laws, and the zeroth, have affected thought on ethics of artificial intelligence as well.

Only for those of ethical persuasion.


Just been catching up on this thread; my head hurts!
 

15 hours ago, old man emu said:

The Three Laws of Robotics, known as Asimov's Laws, are a set of rules devised by science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround". The Three Laws, quoted from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:

Zeroth Law

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

 

The Three Laws, and the zeroth, have affected thought on ethics of artificial intelligence as well.

Asimov will surely be more revered in future, when the genius of his insights is recognised.

Haven’t seen any hint of that other great debate (about religion) in this discussion. Perhaps AI will help free humankind from slavery to irrational faiths.

I’d be more inclined to trust an AI machine to interpret these laws than one of Trump’s Supreme Court appointees.

  • Like 1
  • Agree 1

But what happens when a machine with advanced AI capabilities starts to enquire about the background of the data it is learning from? Will the cultural bias towards spirituality in most cultures cause the machine to move in a particular direction in relation to the idea of one or more gods? Or will the machine be satisfied with the answer to the question, "Who made me?", being XYZ Robotics Pty Ltd?


16 hours ago, Jerry_Atrick said:

disrupting human work is much closer than it was, and to be honest, it has started - just hasn't hit critical mass yet

On that point I agree. But I add that that is exactly what my instructor was pointing out in 1968. It has taken a bit longer than he expected. But the cultural change was evident back then. So was the direction. But for all the wonderful promised lifestyle changes, the reality has not matched. And reality never turns out as nice as the promo.

 

The next big shift that affects Joe Bloggs is going to be quantum computing combined with the merge of A.I. and robotics. Might be sooner than you think, Jerry.

 

I still have a problem with 'distributed A.I.'. If there are 500 million A.I. smartphones at the same time as thousands of business A.I. devices, and millions of A.I. domestic devices, and they are all busy seeking out their own evolutionary paths to improve knowledge (databases) and thought processes (kernel code), that IS chaos. And Asimov nailed it when he realised that you can't create bulletproof ethics in such systems.

 

 

 

1 hour ago, Old Koreelah said:

Perhaps AI will help free humankind from slavery to irrational faiths.

A.I. can replace all religions. Its scope will so far exceed what the human brain can hold and process that whatever A.I. says will be axiomatic - unquestionable truth. Because there is no way to challenge it. And isn't that the concept of a god?

 

Try turning off your smartphone for a week. I didn't realise how much data it supplied me until I did that. I was scrambling for old paper conversion tables and measurement data! And so far, we only use what will soon be seen as 'primitive' A.I. in our lives. Yet it is already our main source of 'truth'

  • Like 1

The problem with TRUTH is the inconvenience of it. Firstly you need to know WHAT it is, and will it ruin your plans? Human judgement is clouded by evolution, i.e. how we responded to and dealt with things in antiquity. Nev

  • Agree 1

2 hours ago, nomadpete said:

And isn't that the concept of a god?

 In the beginning God created ...

Genesis 1:26, "Then God said, “Let us make mankind in our image, in our likeness, ..."

Genesis 1:27, "So God created mankind in his own image, in the image of God he created them; male and female he created them."

 

In the 21st Century, Mankind created God from the dust of the earth.


3 hours ago, old man emu said:

In the beginning God created ...

Genesis 1:26, "Then God said, “Let us make mankind in our image, in our likeness, ..."

Genesis 1:27, "So God created mankind in his own image, in the image of God he created them; male and female he created them."

I was going to make a comment about these two verses, but I don't want to side track this thread. I'll do it in the proper forum.

  • Thanks 1

20 hours ago, nomadpete said:

But I add that that is exactly what my instructor was pointing out in 1968.

I understand. However, I would argue that pre-AI, mass disruption to the human workforce would have been a very long way off, if it happened at all. Transactional and SCADA-based automation resulted more in a redeployment of the workforce than a replacement of it. If we look at transactional automation, it replaced low-level unskilled and semi-skilled work, but required higher-skilled work in its place. SCADA and other control automation probably actually created new employment frontiers, as it allowed innovation. Space exploration would be a great example, and complex plant management would be difficult without it. For example, nuclear control room operators are taught that when the alarms go off, don't touch.

 

In this development of technology, the semi- and non-skilled workforce were at risk; however, it created jobs in the skilled workforce area. As an example, at a client site I demonstrated how a simple interface from the Enterprise Asset Management system to an accounts payable system could remove one full-time employee whose sole purpose was to manually take a report of the timesheets site contractors entered and then re-key them into the accounts payable system. Watching the demo was the person whose job I was eradicating - his salary, I would guess at the time, was £8k/year plus on-costs; the custom interface - USD$2K. He approached me after the demo to inform me it was his job I was eradicating, and asked what he was supposed to do.
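The interface in that demo amounts to a simple mapping: read each timesheet line from the asset-management side and emit an accounts-payable line, removing the manual re-keying step. Roughly like this (the record layout is invented for illustration, not the actual product's schema):

```python
def timesheets_to_ap(timesheets):
    """Turn contractor timesheet records into accounts-payable lines,
    replacing the manual report-and-re-key step entirely."""
    return [
        {
            "vendor": ts["contractor"],
            "amount": round(ts["hours"] * ts["rate"], 2),
            "ref": "TS-{}".format(ts["id"]),
        }
        for ts in timesheets
    ]

lines = timesheets_to_ap([{"id": 1, "contractor": "Acme Ltd", "hours": 8, "rate": 45.0}])
print(lines[0]["amount"])  # 360.0
```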

 

My response was that the company would likely redeploy him - which they did, and I had to train up this new junior system admin on our application. The result - he was much happier in his role, earning more as a junior system admin than he had as a mid-level admin. We still exchange emails infrequently, and he now runs the entire department. OK.. there are some, if not many, who don't have the faculty or the will to retrain, but if there is one thing that is constant, it is change, and we all adapt or die. I have changed my career at least three times..

 

There was an article a while back challenging whether the introduction of all these expensive systems did in fact a) result in increased capacity versus cost, and b) result in a net reduction in employees. It was an interesting article, and it used the ATO as a case study, where it found it hadn't, up to the time of writing (which was, I guess, about 5 years ago). I searched, but Google doesn't bring up the article, in deference to newer articles on the ATO beefing up its systems. It found the processing cost per return, compared to pre-computing days, was higher in real terms, and personnel numbers (employees, contractors, and consultants) were also higher. It provided studies of the private sector as well and cited other organisations. There were shortcomings in the analysis, but in general it provided fodder for my then confirmation bias.

 

But AI is a different beast. Going back to my enterprise asset management days, we implemented systems that tried to do predictive maintenance (different to preventative maintenance in that it uses continual maintenance records to try to determine and adjust the most optimal preventative maintenance schedule). Now, bring AI into the equation and, in theory, the accuracy and optimisation of predictive maintenance will be so much better because of the sheer volume of parameters it can take in. Add robotics, and it dispatches the machines to fix the thing - now you no longer need the work crews (won't happen in nuclear for some time, I may add). But add all the automation in the warehouse to pick the parts, autonomous vehicles to transport the items there, etc., and the only thing people will be used for is to audit/inspect what has happened.
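The predictive-versus-preventative distinction above can be sketched simply: instead of a fixed service interval, fit the interval to the observed failure history. A toy version using only the mean observed time-to-failure with a safety margin (a real system would weigh far more parameters, as the post says; the numbers are invented):

```python
def predictive_interval(failure_hours, margin=0.8, default=500.0):
    """Suggest a maintenance interval from observed times-to-failure.

    Preventative maintenance uses the fixed `default` schedule; the
    predictive version adjusts it from the actual failure record.
    """
    if not failure_hours:
        return default  # no history yet: fall back to the fixed schedule
    mean_ttf = sum(failure_hours) / len(failure_hours)
    return margin * mean_ttf

print(round(predictive_interval([600, 650, 700]), 1))  # 520.0 - well before the ~650h mean
```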

 

That is where AI is a game changer compared to traditional technologies, especially accompanied by robotics. The manufacturing line no longer needs everything to be perfectly situated before the robots can work on it; AI allows them to discern where things are and react accordingly. They can sort through rubbish to determine what is and isn't recyclable. And AI robots can be dispatched for maintenance on the manufacturing robots when things go wrong, etc.

 

Your instructor was right in 1968 - automation would disrupt, but eventually. I took great pride in bringing so-called efficiencies to many organisations, but never displacing an employee - rather redeploying them. It did mean, however, that it reduced demand for new employees, yes... but when you considered the number of techies needed, at better rates of pay, who would not be needed if we weren't doing it, well, I drew comfort.

 

 

  • Informative 1

I can remember being told back in the 80s to be prepared to quickly adapt to change because the occurrence of change was going to be the only constant during one's career. That proved to be correct and still does now and into the future.

 

There is, however, that draw of nostalgia as we get older and tire of constant change. People are reviving cottage industry skills as a means of relaxation. Sometimes the appreciation of others for the products of these cottage industries can provide the producers with a modest living income, but cottage industries are not going to oust large-scale industry. And those who revive cottage industries have the ability to pass those skills on to the next generation. In that way basic skills will never be lost.

 

I am reminded of a story by the science fiction writer Isaac Asimov. In his story, set when computers ran the planet (benignly), the computers stopped being able to calculate. This threw a particular organisation into turmoil until the wimpy little nerd who had been kicked around all his life started to produce the answers to complex calculations. When the Big Shots brought him before them to explain how he was doing what he was doing, he pulled out a pencil and paper and showed them how to do basic arithmetic. That story is reflected in the difference between the provision of writing implements for astronauts and cosmonauts. The Yanks spent millions trying to develop a ball-point pen that would work in weightlessness. The Russians provided their cosmonauts with pencils.

  • Like 1

23 minutes ago, old man emu said:

The Yanks spent millions trying to develop a ball-point pen that would work in weightlessness. The Russians provided their cosmonauts with pencils.

 

Sorry, not trying to derail your point but the story about NASA and pens/pencils turns out to be apocryphal.

 

Fact or Fiction?: NASA Spent Millions to Develop a Pen that Would Write in Space, whereas the Soviet Cosmonauts Used a Pencil

 

 

  


1 hour ago, Jerry_Atrick said:

Transactional and SCADA-based automation resulted more in a redeployment of the workforce than a replacement of

No. I worked in this area of power generation, also in telcos.

 

Pre computerised (now slightly A.I.) power station control systems required large panels of instruments and many manual controls. And many subject matter experts with intimate knowledge of the machinery, to interpret and control stuff.

 

When I left a couple of years ago, there was a room filled with screens and two bored systems operators who oversaw an entire power station. Staff numbers were decimated, mostly through natural attrition (early retirement). Very, very little redeployment, and that mostly by a minority - usually younger ones motivated by large mortgages.

 

1 hour ago, Jerry_Atrick said:

the semi- and non-skilled workforce were at risk; however, it created jobs in the skilled workforce area.

I consider myself one of the lucky few who adjusted to constant change. I have been out of work many times due to tech changes. Happily, I got paid redundancy twice, which helped. But about half a dozen other times 'redeployment' saved me from losing my house. What they call redeployment mostly meant desperately struggling to snag a new job when one's skillset had suddenly become outdated. Accepting a drop in pay. Then expensive, time-consuming retraining, and climbing the advancement ladder again.

 

Once I attended a redeployment session presented by a professional employment agency, at my employer's expense, and after all the positive spruiking, I learnt that the best I could hope for was intermittent project work and periods of unemployment. Then, when pressed, they said, "Well, between contracts you'll need a holiday." Some of my colleagues got nice consultancy work in Belgium, or not-so-nice work in the UAE.

 

My examples only cover 'automation'. Back in 1968 we were promised that automation would bring retirement at 50, and the productivity to pay for medicine, education and pensions for all. The A.I. revolution has only just begun. Interesting times.

 

Summarising - your success story does not represent the majority of skilled workers who are steadily being displaced.

  • Informative 1
