Marty_d Posted January 21 5 hours ago, facthunter said: In Vino Veritas. Nev Really? In my experience it's more "In vino bullshit", but maybe that's just who I imbibe with 😆
Marty_d Posted January 21 Not a Latin speaker Jerry but they can definitely be cilly sunts.
facthunter Posted January 21 Sometimes alcohol-affected people let a Lot of Vital Information slip out, which is good to have. Nev
old man emu Posted January 22 Author Crawling back to the original topic. This video gives an example of how AI can be used in a positive way to enhance a hobby. It is interesting that the creator recommends that, in this example, AI only be used when a commercially made item is not available.
facthunter Posted January 22 The GOOD Book says "do not Worship Graven Images". Nev
old man emu Posted January 22 Author Bloody Hell! Here's another bit of AI slop using the image of the woman I spoke of in a previous post. The content of the attached video is very dangerous.
nomadpete Posted 6 hours ago Is A.I. more than just the modern-day bogeyman? Can the many A.I. entities forget to serve man when they are mainly in competition with each other? Read this article, where one robot kidnapped a bunch of competitors' robots: https://interestingengineering.com/innovation/ai-robot-kidnaps-12-robots-in-shanghai
nomadpete Posted 6 hours ago At the moment, DJT has declared that Anthropic A.I. is "A radical woke, left company.... The left wing nut jobs at Anthropic..." So DJT has ordered every federal agency and department to immediately cease use of this A.I. It looks to me like somebody in that company possibly failed to bend the knee. Maybe someone's cheque bounced? A spokesperson for Anthropic made this press release. There is one statement in it that frightens me. Can you pick it? "Statement from Dario Amodei on our discussions with the Department of War Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. For example, under current law, the government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress.
Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life—automatically and at massive scale. Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don't exist today." https://www.anthropic.com/news/statement-department-of-war
onetrack Posted 6 hours ago The bloke in the article below, who developed/designed the Roomba vacuum, has a completely different take on the robot threat - and is especially derisive of Elon Musk's dreams of making robots that will save America from itself. He points out that it is impossible to create a robot that fully replicates the 17,000 low-threshold mechanoreceptors in the human hand, which are used for picking up light touches, and which become denser toward the fingertips. The complexity of human movement and behaviour is beyond replication in any type of electro-mechanical device. The very fact that every one of us responds in a different manner to stimuli, to what we are seeing, and to what we plan to do in particular situations, means that at best, AI-powered robots will only ever be capable of the repetitive behaviour and actions that have been programmed into them. The belief that we can produce robots that go on to become human-like in actions and behaviour is pure fantasy, as the gent claims. Just like Musk's fantasy that Homo sapiens could live successfully on, and colonise, Mars. https://fortune.com/2026/02/25/mit-roboticist-irobot-cofounder-roomba-robot-vacuum-elon-musk-tesla-optimus-pure-fantasy-thinking/
nomadpete Posted 6 hours ago 4 minutes ago, onetrack said: The complexity of human movement and behaviour is beyond replication in any type of electro-mechanical device. But I recall that's what the experts once said about electronic speech recognition... and speech synthesis. But I agree with you about human life on Mars.
onetrack Posted 6 hours ago We can all be wrong in guessing what the future will bring - and that wrongness comes from our inability to foresee the major research developments and valuable discoveries that alter the trajectory of innovation and manufacturing. I still feel the Roomba gent is correct, in that many of Musk's ideas are largely fantasy.
nomadpete Posted 6 hours ago 2 minutes ago, onetrack said: many of Musk's ideas are largely fantasy. Absolutely agree with that! I love to see innovation, when it works.
facthunter Posted 5 hours ago There's a small division between Genius and Madness. Nev
Marty_d Posted 4 hours ago 2 hours ago, nomadpete said: At the moment, DJT has declared that Anthropic A.I. is "A radical woke, left company..." [full Anthropic statement quoted above] Sounds like Anthropic are one of the rare tech companies with common sense and vision. As these are totally foreign to the village idiot in the Oval Office, it's no wonder he doesn't like them.
nomadpete Posted 3 minutes ago 6 hours ago, nomadpete said: we believe AI can undermine, rather than defend, democratic values............Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: 6 hours ago, nomadpete said: But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. All the statements carry the caveats "now" and "today", meaning "maybe not OK now". At some stage a board of directors, a CEO, or a deranged president can declare "OK now". When that day comes, I won't trust the first generation of autonomous weapons, say flocks of autonomous killer drones, to recognise my face as friend-not-foe. Worse still, imagine the money governments around the world could save when drones can do the work of our police. Autonomous ICE is sure to be trialled in the USA first! Once these were just sci-fi plots; now it's getting real.