AI is reaching the point where there is wild speculation, potential regulation, and opinions on whether it will destroy humanity. So how does anyone make sense of it all?
SHOW: 770
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
- Code Comments - An original podcast from Red Hat (Season 2)
- Adjusting to new technology, from teams that have been through it
- Find "Breaking Analysis Podcast with Dave Vellante" on Apple, Google and Spotify
- Keep up to date on Enterprise Tech with theCUBE
SHOW NOTES:
- OpenAI’s former top safety researcher says there’s a ’10 to 20% chance’ that the tech will take over with many or most ‘humans dead’
- The Dawn of the AI Era 2022-2023 (Acquired Podcast)
- An AI Capitalist Primer
- Techno-Optimist Manifesto (a16z)
- US Gov’t Executive Order on Safe, Secure, Trustworthy AI (Oct 2023)
- Regulating AI by Executive Order (Steven Sinofsky)
- The Philanthropy of Silicon Valley
- “The Trolley Problem”
ARE WE UNDER- OR OVER-REACTING TO THE AI POSSIBILITIES?
- Oppenheimer said the possibility of destruction was “near zero”
- An OpenAI safety researcher said there’s a 10-20% chance of human destruction
WHAT ARE THE OPEN, REGULATORY, AND STRUCTURAL GUARDRAILS OF AI?
- What is good or bad with AI?
- Should societal concerns be considered? By whom?
- Should environmental concerns be considered? By whom?
FEEDBACK?
- Email: show at the cloudcast dot net
- Twitter: @thecloudcastnet
from The Cloudcast (.NET) https://bit.ly/47Mysoj