Rocksolid Light

News from da outaworlds

mail  files  register  groups  login

Message-ID:  

BOFH excuse #73: Daemons did it


comp / comp.risks / Risks Digest 33.95

SubjectAuthor
o Risks Digest 33.95RISKS List Owner

1
Subject: Risks Digest 33.95
From: RISKS List Owner
Newsgroups: comp.risks
Organization: PANIX Public Access Internet and UNIX, NYC
Date: Sat, 2 Dec 2023 23:33 UTC
Path: eternal-september.org!news.eternal-september.org!feeder3.eternal-september.org!eternal-september.org!panix!.POSTED.panix3.panix.com!not-for-mail
From: risko@csl.sri.com (RISKS List Owner)
Newsgroups: comp.risks
Subject: Risks Digest 33.95
Date: 2 Dec 2023 23:33:11 -0000
Organization: PANIX Public Access Internet and UNIX, NYC
Lines: 959
Sender: RISKS List Owner <risko@csl.sri.com>
Approved: risks@csl.sri.com
Message-ID: <CMM.0.90.4.1701559790.risko@chiron.csl.sri.com11431>
Injection-Info: reader2.panix.com; posting-host="panix3.panix.com:166.84.1.3";
logging-data="23636"; mail-complaints-to="abuse@panix.com"
To: risko@csl.sri.com
View all headers

RISKS-LIST: Risks-Forum Digest Saturday 2 December 2023 Volume 33 : Issue 95

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/33.95>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>

Contents:
Commercial Flights Are Experiencing 'Unthinkable' GPS Attacks
and Nobody Knows What to Do (Vice)
G7 and EU countries pitch guidelines for AI cybersecurity
(Joseph Bambridge)
U.S. and UK Unveil AI Cyber-Guidelines (Politico via PGN)
Was Argentina the First AI Election? (NYTimes)
As AI-Controlled Killer Drones Become Reality, Nations Debate Limits,
(The New York Times)
Reports that Sports Illustrated used AI-generated stories and fake
authors are disturbing, but not surprising (Poynter)
Is Anything Still True? On the Internet, No One Know
Anymore (WSJ)
ChatGPT x 3 (sundry sources via Lauren Weinstein)
Texas Rejects Science Textbooks Over Climate Change, Evolution Lessons
(WSJ)
A `silly' attack made ChatGPT reveal real phone numbers
and email addresses (Engadget)
Meta/Facebook profiting from sale of counterfeit U.S. stamps
(Mich Kabay)
Chaos in the Cradle of AI (The New Yorker)
Impossibility of Strong watermarks for Generative AI
Intel hardware vulnerability (Daniel Moghimi at Google_
Hallucinating language models (Victor Miller)
USB worm unleashed by Russian state hackers spreads worldwide
(Ars Technica)
AutoZone warns almost 185,000 customers of a data breach
(Engadget)
Okta admits hackers accessed data on all customers during recent breach
(TechCrunch)
USB worm unleashed by Russian state hackers spreads worldwide
(Ars Technica)
Microsoft’s Windows Hello fingerprint authentication has been bypassed
(The Verge)
Thousands of routers and cameras vulnerable to new 0-day attacks
by hostile botnet (Ars Technica)
A Postcard From Driverless San Francisco (Steve Bacher)
Voting machine trouble in Pennsylvania county triggers alarm ahead of 2024
(Politico via Steve Bacher)
Outdated Password Practices are Widespread (Georgia Tech)
THE CTIL FILES #1 (Shellenberger via geoff goodfellow)
Judge rules it's fine for car makers to intercept your text messages
(Henry Baker)
Protecting Critical Infrastructure from Cyber Attacks (RMIT)
Crypto Crashed and Everyone's In Jail. Investors Think It's
Coming Back Anyway. (Vice)
Feds seize Sinbad crypto mixer allegedly used by North Korean e
hackers (TechCrunch)
A lost bitcoin wallet passcode helped uncover a major security flaw
(WashPost)
Ontario's Crypto King still jet-setting to UK, Miami, and soon Australia
despite bankruptcy (CBC)
British Library confirms customer data was stolen by hackers,
with outage expected to last months (TechCrunch)
PSA: Update Chrome browser now to avoid an exploit
already in the wild (The Verge)
WeWork has failed. Like a lot of other tech startups, it left damage in its
wake (CBC)
Re: The AI Pin (Rob Slade)
Re: Social media gets teens hooked while feeding aggression and
impulsivity, and researchers think they know why (C.J.S. Hayward)
Re: Garble in Schneier's AI post (Steve Singer)
Re: Using your iPhone to start your car is about to get a
lot easier (Sam Bull)
Re: Oveview of the iLeakage Attack (Sam Bull)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Mon, 20 Nov 2023 19:00:14 -0500
From: Monty Solomon <monty@roscom.com>
Subject: Commercial Flights Are Experiencing 'Unthinkable' GPS Attacks
and Nobody Knows What to Do (Vice)

New "spoofing" attacks resulting in total navigation failure have been
occurring above the Middle East for months, which is "highly significant"
for airline safety.

https://www.vice.com/en/article/m7bk3v/commercial-flights-are-experiencing-unthinkable-gps-attacks-and-nobody-knows-what-to-do

------------------------------

Date: Mon, 27 Nov 2023 9:10:36 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: G7 and EU countries pitch guidelines for AI cybersecurity
(Joseph Bambridge)

Joseph Bambridge, Politico Europe, 27 Nov 2023

Cybersecurity authorities in 18 major European and Western countries,
including all G7 states, today released joint guidelines on how to
develop artificial intelligence systems in ways that ensure their
cybersecurity.

The United Kingdom, United States, Germany, France, Italy, Australia,
Japan, Israel, Canada, Nigeria, Poland and others backed what they
called the world's first AI cybersecurity guidelines. The initiative
was led by the U.K.'s National Cyber Security Centre and follows
London's AI Safety Summit that took place early November.

The 20-page document sets out practical ways providers of AI systems can
ensure they function as intended, don't reveal sensitive data and aren't
taken offline by attacks.

AI systems face both traditional threats and novel vulnerabilities
like data poisoning and prompt injection attacks, the authorities
said. The guidelines -- which are voluntary -- set standards for how
technologists design, deploy and maintain AI systems with
cybersecurity in mind.

The U.K.'s NCSC will present the guidelines at an event Monday after
noon.

<https://y3r710.r.eu-west-1.awstrack.me/I0/0102018c10220f9c-cd93ae92-527e-4258-a9b4-5c43adb51332-000000/VBwAxQb3zMQOCAxex0irXa9NdgE=349>

------------------------------

Date: Tue, 28 Nov 2023 11:26:30 PST
From: Peter Neumann <neumann@csl.sri.com>
Subject: U.S. and UK Unveil AI Cyber-Guidelines (Politico)

(Joseph Bambridge, Politico, PGN-ed for RISKS)

U.S. and UK UNVEIL AI CYBER GUIDELINES

The UK's National Cyber Security Center and U.S. Cybersecurity and
Infrastructure Security Agency on Monday unveiled what they say are the
world's first AI cyber guidelines, backed by 18 countries including Japan,
Israel, Canada and Germany. It's the latest move on the international stage
to get ahead of the risks posed by AI as companies race to develop more
advanced models, and as systems are increasingly integrated in government
and society.

``Overall I would assess them as some of the early formal guidance
related to the cybersecurity vulnerabilities that derive from both
traditional and unique vulnerabilities,'' the Center for Strategic and
guidelines appeared to be aimed at both traditional cyberthreats and
new ones that come with the continued advancement of AI technologies.

Although the guidelines are voluntary, Allen said they could be made
mandatory for selling to the U.S. federal government for certain types
of risk-averse activities. In the private sector, Allen said
companies buying AI technologies could require vendors to demonstrate
compliance with the guidelines through third-party certification or
other means.

Breaking it down: The guidelines aim to ensure security is a core
requirement of the entire lifecycle of an AI system, and are focused
on four themes: secure design, development, deployment and operation.
Each section has a series of recommendations to mitigate security
risks and safeguard consumer data, such as threat modeling, incident
management processes and releasing AI models responsibly.

Homeland Security Secretary Alejandro Mayorkas said in a statement
that the guidelines are a ``historic agreement that developers must
invest in, protecting customers at each step of a system's design and
development.''International Studies' Gregory Allen told POLITICO. He said the

The guidance is closely aligned with the U.S. National Institute of
Standards and Technology's Secure Software Development Framework
(which outlines steps for software developers to limit vulnerabilities
in their products) and CISA's secure-by-design principles, which was
also released in concert with a dozen other states.

Acknowledgements: The document includes a thank you to a notable list
of leading tech companies for their contributions, including Amazon,
Anthropic, Google, IBM, Microsoft and OpenAI. Also in the mentions
were Georgetown University's Center for Security and Emerging
Technology, RAND and the Center for AI Safety and the program for
Geopolitics, Technology and Governance, both at Stanford.

Aaron Cooper, VP of global policy at tech trade group BSA | The
Software Alliance, said in a statement to MT that the guidelines help
`build a coordinated approach for cybersecurity and artificial
intelligence,'' something that BSA has been calling for in many of its
cyber and AI policy recs.

------------------------------

Date: Mon, 20 Nov 2023 11:40:21 -0500 (EST)
From: ACM TechNews <technews-editor@acm.org>
Subject: Was Argentina the First AI Election? (NYTimes)

Jack Nicas and Luc=C3=8Ca Cholakian Herrera
*The New York Times*, 16 Nov 2023
via ACM TechNews, November 20, 2023

Sergio Massa and Javier Milei widely used artificial intelligence (AI) to
create images and videos to promote themselves and attack each other prior
to Sunday's presidential election in Argentina, won by Milei. AI made
candidates say things they did not, put them in famous movies, and created
campaign posters. Much of the content was clearly fake, but a few creations
strayed into the territory of disinformation. Researchers have long worried
about the impact of AI on elections, but those fears were largely
speculative because the technology to produce deepfakes was too expensive
and unsophisticated. "Now we've seen this absolute explosion of incredibly
accessible and increasingly powerful democratized tool sets, and that
calculation has radically changed," said Henry Ajder, an expert who has
advised governments on AI-generated content.


Click here to read the complete article
1

rocksolid light 0.9.8
clearnet tor