
Posted
5 hours ago, facthunter said:

In Vino Veritas.  Nev

Really? In my experience it's more "In vino bullshit", but maybe that's just who I imbibe with 😆

Posted

Crawling back to original topic.

 

This video gives an example of how AI can be used in a positive way to enhance a hobby. Interestingly, the creator recommends using AI in this example only when a commercially made item is not available.

 

 

Posted

Bloody Hell! Here's another bit of AI slop using the image of the woman I spoke of in a previous post. The content of the attached video is very dangerous.

 

 

1 month later...
Posted

At the moment, DJT has declared that Anthropic A.I. is 

 

"A radical woke, left company....

The left wing nut jobs at Anthropic..."

 

So DJT has ordered every federal agency and department to immediately cease use of this A.I.

It looks to me like possibly somebody in that company failed to bend the knee. Maybe someone's cheque bounced?

 

A spokesperson for Anthropic made this press release. There is one statement in it that frightens me. Can you pick it?

 

"Statement from Dario Amodei on our discussions with the Department of War


Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
 

  • Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.

 

  • Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today."
Posted

The bloke in the article below, who developed/designed the Roomba vacuum, has a completely different take on the robot threat - and is especially derisive of Elon Musk's dreams of making robots that will save America from itself.

He points out that it is impossible to create a robot fully capable of replicating the 17,000 low-threshold mechanoreceptors in the human hand that detect light touch - mechanoreceptors which become denser toward the fingertips. The complexity of human movement and behaviour is beyond replication in any type of electro-mechanical device.

 

The very fact that every one of us responds differently to stimuli - to what we are seeing, to what we plan to do in particular situations - means that, at best, AI-powered robots will only ever be capable of the repetitive behaviour and actions that have been programmed into them.

The belief that we can produce robots that go on to become human-like in actions and behaviour is pure fantasy, as the gent claims. Just like Musk's fantasy that homo sapiens could live successfully on, and colonise, Mars.

 

https://fortune.com/2026/02/25/mit-roboticist-irobot-cofounder-roomba-robot-vacuum-elon-musk-tesla-optimus-pure-fantasy-thinking/

Posted (edited)
4 minutes ago, onetrack said:

The complexity of human movement and behaviour is beyond replication in any type of electro-mechanical device.

But, I recall that is what the experts once said about electronic speech recognition.... and speech synthesis.

 

But I agree with you about human life on Mars.

 

 

Edited by nomadpete
Posted

We can all be wrong in guessing what the future will bring - and that wrongness comes from our inability to foresee the major research developments, and valuable discoveries, that alter the trajectory of innovation and manufacturing. I still feel the Roomba gent is correct: many of Musk's ideas are largely fantasy.

Posted
2 minutes ago, onetrack said:

many of Musk's ideas are largely fantasy.

Absolutely agree with that!

 

I love to see innovation, when it works.

Posted
2 hours ago, nomadpete said:

At the moment, DJT has declared that Anthropic A.I. is

"A radical woke, left company....

The left wing nut jobs at Anthropic..."

Sounds like Anthropic AI are one of the rare tech companies with common sense and vision.

As these are totally foreign to the village idiot in the Oval Office, it's no wonder he doesn't like them.

Posted
6 hours ago, nomadpete said:

we believe AI can undermine, rather than defend, democratic values... Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

 

6 hours ago, nomadpete said:

But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

All the statements carry the caveats "now" and "today" - meaning "maybe not OK now".

At some stage, a board of directors, a CEO, or a deranged president can declare "OK now".

 

When that day comes I won't trust the first generation of autonomous weapons, say perhaps flocks of autonomous killer drones, to recognise my face as friend-not-foe.

 

Worse still, imagine the money that governments around the world could save when drones can do the work of our police. Autonomous ICE is sure to be trialled in the USA first!

 

Once this was just a sci-fi plot; now it is getting real.
