Mack DeGeurin | Popular Science
https://www.popsci.com/authors/mack-degeurin/

Biden’s $623 million charging initiative faces struggles to get EVs over the finish line
https://www.popsci.com/technology/biden-ev-charging-funding/
Thu, 11 Jan 2024
Stubbornly high EV prices, cheaper gas, and production setbacks complicate the U.S.’ embrace of electric vehicles.

[Image: Newly funded projects should lead to the construction of an estimated 7,500 EV charging ports, with many located in lower-income and rural areas where charging infrastructure is still spotty. Credit: DepositPhotos]

A cross-country US road trip in an electric vehicle might start to sound more appealing thanks to a fresh $623 million round of investment in EV charging networks from the Biden Administration. The new funds will inch the US towards Biden’s ultimate goal of 500,000 EV chargers nationwide by 2030 and help put to rest some drivers’ fears of running out of juice mid-journey. But infrastructure alone may not be enough to counteract slumping EV sales in recent months. Persistently high sticker prices and falling gas prices have left most Americans sitting on the fence when it comes to considering a new EV.

The Biden Administration’s Department of Transportation announced the new funding on Thursday. It will come by way of grants supporting 47 EV charging and alternative-fueling projects spread across 22 states and Puerto Rico. Those projects should lead to the construction of an estimated 7,500 EV charging ports, with many located in lower-income and rural areas where charging infrastructure is still spotty. The latest round of EV funding stems from the 2021 Bipartisan Infrastructure Law, which carved out $7.5 billion in total funds for charging infrastructure.

“The public placed great trust in DOT, and we are honoring that trust by making improvements to transportation that get people and goods to where they need to be more safely, affordably, and sustainably while creating good-paying jobs,” DOT Secretary Pete Buttigieg said in a statement.

Continued investment in charging infrastructure is crucial to addressing range anxiety, one of the top barriers keeping drivers from switching over to electric vehicles. On that front, the administration claims publicly available EV charging ports have increased nearly 70% nationwide since Biden took office in 2021. That adds up to 161,562 total ports as of late last year, which works out to around a third of the administration’s goal of half a million chargers by the end of the decade.

Additional government-funded charging ports can have less obvious psychological effects as well. Fewer than half (47%) of US adults surveyed by Pew Research last year said they were confident the US government could build out the infrastructure needed to properly power electric vehicles nationwide. But those who did think the government was up to the task were also far more likely to say they would consider an electric vehicle the next time they buy a car. Proper infrastructure support from the government, in other words, can make EVs seem more attractive to potential buyers.

High up-front costs and cheaper gas present roadblocks 

But easy access to charging ports isn’t the only issue keeping EVs from flooding US highways. Despite years of technological innovation and government subsidies, most electric vehicles are simply too expensive for average buyers. Tesla, by far the largest seller of EVs in the US, made a dent in the average EV cost when it slashed its own prices, but consumers still lack much variety in terms of new electric vehicles under $50,000. A recent survey of global respondents by S&P Global Mobility listed affordability as the top concern slowing EV demand. Nearly half (48%) of those respondents said EV prices were simply too high.

“Pricing is still very much the biggest barrier to electric vehicles,” S&P Global Mobility Senior Technical Research Analyst Yanina Mills said in the report.

Slowing EV sales could, ironically, be partly explained by cheaper gas. EVs experienced a blockbuster year of adoption in 2022, when gas prices soared upwards of $5 per gallon in certain parts of the US. Those inflated gas prices made switching over to an electric vehicle, even one slightly more expensive than an internal combustion alternative, more attractive. But prices fell back down to around $3 per gallon nationally last year, which some experts argue may have turned off would-be EV buyers who were previously on the fence.

Making matters worse, carmakers like Ford and Audi have either scaled back production targets or delayed the rollout of certain EV models, citing recent market trends. AutoPacific President and Chief Analyst Ed Kim recently told ABC News that these factors, along with consumers’ recent attitudes towards EVs, mean electric vehicle sales could top out around 1.5 million units by the end of 2024, a more reserved estimate than earlier, more optimistic predictions.

“We’re not seeing the level of frenzied activity we saw earlier,” Kim told ABC. “There’s a slight tapering of demand and partially a market correction.” 

None of that necessarily means EVs are down for the count. Asking prices for less luxury-focused EV models are likely to continue dropping as carmakers ramp up manufacturing. Ford, one of the leading automakers by volume in the US, says it plans to produce 600,000 EV units annually by sometime next year. Other automakers have similar EV production ambitions. Cheaper upfront costs could similarly make slight variations in gas affordability less of a make-or-break consideration for drivers thinking about making the switch to EVs.

“The rate of adoption has tailed off a little bit but it’s still growing,” Kim added. “This is not a catastrophe for EVs.”

EV charging availability alone won’t suddenly shift the vast majority of US drivers away from internal combustion engines, but a lack of that availability will undoubtedly make the shift much harder. Rapid EV adoption will likely rely on a careful combination of an expanded charging network, lower upfront costs, and continuing shifts in overall demand.

Beware the AI celebrity clones peddling bogus ‘free money’ on YouTube
https://www.popsci.com/technology/youtube-free-money-deepfakes/
Wed, 10 Jan 2024
Steve Harvey, Taylor Swift, and other famous people’s sloppy deepfakes are being used in sketchy ‘medical card’ YouTube videos.

[Image: YouTube]

Online scammers are using AI voice-cloning technology to make it appear as if celebrities like Steve Harvey and Taylor Swift are encouraging fans to fall for medical benefits-related scams on YouTube. 404 Media first reported on the trend this week. These are just some of the latest examples of scammers harnessing increasingly accessible generative AI tools to target often economically impoverished communities and impersonate famous people for quick financial gain.

404 Media was contacted by a tipster who pointed the publication towards more than 1,600 videos on YouTube in which deepfaked celebrity voices, as well as non-celebrities, push the scams. Those videos, many of which remain active at the time of writing, reportedly amassed 195 million views. The videos appear to violate several of YouTube’s policies, particularly those around misrepresentation and spam and deceptive practices. YouTube did not immediately respond to PopSci’s request for comment.

How does the scam work?

The scammers try to trick viewers by using chopped-up clips of celebrities paired with voiceovers created by AI tools mimicking the celebrities’ own voices. Steve Harvey, Oprah, Taylor Swift, podcaster Joe Rogan, and comedian Kevin Hart all have deepfake versions of their voices appearing to promote the scam. Some of the videos don’t use celebrity deepfakes at all, instead relying on a recurring cast of real humans pitching different variations of a similar story. The videos are often posted by YouTube accounts with misleading names like “USReliefGuide,” “ReliefConnection,” and “Health Market Navigators.”

“I’ve been telling you guys for months to claim this $6,400,” a deepfake clone attempting to impersonate Family Feud host Steve Harvey says. “Anyone can get this even if you don’t have a job!” That video alone, which was still on YouTube at the time of writing, had racked up over 18 million views.

Though the exact wording of the scams varies by video, they generally follow a basic template. First, the deepfaked celebrity or actor addresses the audience, alerting them to a $6,400 end-of-the-year holiday stimulus check provided by the US government and delivered via a “health spending card.” The celebrity voice then says anyone can apply for the stimulus so long as they are not already enrolled in Medicare or Medicaid. Viewers are then usually instructed to click a link to apply for the benefits. Like many effective scams, the videos also introduce a sense of urgency by trying to convince viewers the bogus deal won’t last long.

In reality, victims who click through those links are often redirected to URLs with names like “secretsavingsusa.com” that are not actually affiliated with the US government. Reporters at PolitiFact called a signup number listed on one of those sites and spoke with an “unidentified agent” who asked them for their income, tax filing status, and birth date: all sensitive personal data that could potentially be used to commit identity fraud. In some cases, the scammers reportedly ask for credit card numbers as well. The scam appears to use confusion over real government health tax credits as a hook to reel in victims.

Numerous government programs and subsidies do exist to assist people in need, but generic claims offering “free money” from the US government are generally a red flag. Falling costs for generative AI technology capable of creating somewhat convincing mimics of celebrities’ voices could make these scams even more prevalent. The Federal Trade Commission (FTC) warned of this possibility in a blog post last year, where it cited examples of fraudsters using deepfakes and voice clones to engage in extortion and financial fraud, among other illegal activities. A study published in PLOS One last year found deepfake audio can already fool human listeners nearly 25% of the time.

The FTC declined to comment on this recent string of celebrity deepfake scams. 

Affordable, easy-to-use AI tech has sparked a rise in celebrity deepfake scams

This isn’t the first case of deepfake celebrity scams, and it almost certainly won’t be the last. Hollywood legend Tom Hanks recently apologized to his fans on Instagram after a deepfake clone of himself was spotted promoting a dental plan scam. Not long after that, CBS anchor Gayle King said scammers were using similar deepfake methods to make it seem like she was endorsing a weight-loss product. More recently, scammers reportedly combined an AI clone of pop star Taylor Swift’s voice with real images of her using Le Creuset cookware to try to convince viewers to sign up for a kitchenware giveaway. Fans never received the shiny pots and pans.

Lawmakers are scrambling to draft new laws or clarify existing legislation to try to address the growing issues. Several proposed bills, like the Deepfakes Accountability Act and the No Fakes Act, would give individuals more power to control digital representations of their likeness. Just this week, a bipartisan group of five House lawmakers introduced the No AI FRAUD Act, which attempts to lay out a federal framework to protect individuals’ rights to their digital likeness, with an emphasis on artists and performers. Still, it’s unclear how likely those bills are to pass amid a flurry of new, quickly devised AI legislation entering Congress.

Update 01/11/24 8:49am: A YouTube spokesperson got back to PopSci with this statement: “We are constantly working to enhance our enforcement systems in order to stay ahead of the latest trends and scam tactics, and ensure that we can respond to emerging threats quickly. We are reviewing the videos and ads shared with us and have already removed several for violating our policies and taken appropriate action against the associated accounts.”

Waymo plans to put autonomous taxis on freeways ‘in the coming weeks’
https://www.popsci.com/technology/waymo-autonomous-taxis-freeway/
Tue, 09 Jan 2024
The company says it will take a ‘phased’ approach.

[Image: Waymo employees in Phoenix, Arizona will begin testing autonomous rides on freeways first. Credit: DepositPhotos]

Alphabet-owned Waymo says it’s ready to begin offering autonomous, “rider-only” trips on freeways in Phoenix, Arizona, nearly 15 years after its founding. Waymo will take a multi-phased approach to freeway testing, initially restricting rides to employees as passengers before eventually opening the service up to customers using its Waymo One ride-hailing app. That relatively methodical rollout follows months of trouble for leading autonomous vehicle (AV) competitor Cruise, which was forced to freeze all operations in California last year following a string of safety concerns.

Waymo, which already offers publicly available rides in parts of Phoenix, San Francisco, and Los Angeles, explained its plans for the new freeway testing in a recent blog post. The company will use what it calls a “phased approach,” first offering “rider only” freeway commutes to Waymo employees in Phoenix. Employees will provide feedback on both the service and the rider experience, which Waymo says it will analyze before expanding rides to the wider public. Waymo did not provide any hard dates for when that expansion would occur, opting instead to say it would operate in a “step-by-step manner.” A Waymo spokesperson told PopSci that employees would begin taking these trips on freeways in Phoenix “in the coming weeks.”

“Before expanding, we ensure we have a comprehensive understanding of the environment we plan to operate and our system’s capabilities,” Waymo wrote in its blog post. “Waymo’s years of experience driving cars and trucks on freeways taught us to navigate everyday scenarios autonomously and inform our approach to responding to rare events safely.” 

The company says its gradual expansion to freeway rides could drastically cut down on some commute times in areas where its AVs would previously seek out alternative, non-highway routes. Those brisker ride times could help address complaints from critics who say AV rides can be frustratingly time-consuming.

Alongside its blog post, Waymo released a video showing its vehicles operating on a freeway at speeds approaching 65 miles per hour, as well as an image showing the time saved when an AV used a freeway route.

Waymo’s acceleration onto freeways comes just months after GM-backed Cruise, one of the top players in the AV space, was forced to freeze operations in California. In October, multiple vehicles from Cruise’s fleet of driverless Chevrolet Bolts were reportedly responsible for causing lengthy, frustrating traffic jams. Around that same time, another Cruise vehicle reportedly ran over a woman and dragged her after a hit-and-run driver collided with the pedestrian and flung her into the AV’s path. Another Cruise vehicle operating in San Francisco drove into wet cement. Those incidents and growing pushback ultimately ended with the California Department of Motor Vehicles suspending Cruise’s testing permits. Cruise froze all US driverless operations and CEO Kyle Vogt resigned. Regulators forced Cruise off the road before it could begin offering rides on freeways.

Over its years of development, Waymo has tried to distinguish itself from competitors in the AV space by emphasizing its claimed commitment to safety over the Silicon Valley mantra of moving fast and breaking things. Last year, Waymo released a report laying out its “credible case for safety,” explaining the steps it takes to determine whether or not an AV system is safe enough to be deployed on a public road without a human driver.

But freeway driving takes Waymo into new, potentially riskier territory. Unlike local city street driving, mistakes on freeways are more likely to carry the risk of serious injury or death. And despite Waymo’s assurances that driverless cars are safer overall than humans, many everyday US drivers still aren’t convinced. Some 38% of US adults polled by YouGov last year said they feared widespread use of driverless cars on roads would increase the number of people killed in traffic accidents. That’s more than double the share (17%) who believed driverless cars would reduce crashes.

General public queasiness around AVs makes commitments to safety and transparency all the more crucial. Some 63% of US adults surveyed by Pew Research in 2022 said they would not want to ride in a driverless vehicle, and 45% said they wouldn’t feel comfortable sharing a road with one. Almost everyone in that survey (87%) agreed driverless vehicles should be held to higher testing standards than regular vehicles.

Cookies are finally dying. But what comes next?
https://www.popsci.com/technology/cookies-google-dead/
Thu, 04 Jan 2024
What you need to know about Google’s plan to replace the internet’s most pervasive tracker.

[Image: The shift could leave the internet devoid of possibly the most disarmingly cute name for a pervasive surveillance tool. Credit: DepositPhotos]

If you’ve ever wondered why the ad you saw for sunglasses on your phone suddenly appears again on your laptop, third-party cookies are likely the culprit. Now, after four years of false starts and backpedaling, Google is finally making good on its promise to phase out the pesky third-party cookie. Starting this week, some 30 million people, or around 1% of global Chrome browser users, will have the notoriously persistent trackers turned off by default. That could adversely affect advertisers’ ability to collect sensitive information about those users and to serve them ads for products that seem to ravenously follow them from site to site. Google’s eventual cookie phase-out could mark one of the single greatest disruptions to the online economy in memory.

Google’s limited cookie phase-out, which it’s calling a “Tracking Protection” test, is the first step in a massive plan to phase out the trackers for all Chrome users by the second half of 2024. The search giant wants to replace cookies, long a major point of concern for privacy advocates due to their invasiveness, with a series of more privacy-preserving tools within its “Privacy Sandbox.” Google has held off on emptying the cookie jar for years, due in large part to concerns from marketers and advertisers who feared a sudden switch away from the 30-year-old industry standard could gut their profitability. Ready or not, Google is moving forward.

“With the Privacy Sandbox, we’re taking a responsible approach to phasing out third-party cookies in Chrome,” Google’s VP of Privacy Sandbox Anthony Chavez said in a blog post.

What are cookies anyway? 

Cookies, which are small snippets of text sent to Chrome or other browsers from websites you’ve visited, are the primary trackers underpinning much of the modern internet. Every time you load a website, it will check to see if it’s previously left a cookie with you. 

These trackers can help users stay logged into a site or help a site remember what users leave in their shopping carts. But other, more personal details like your phone number and email address may also be stored in cookies, which can essentially function like unique identifiers following you as you surf the web. 
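
To see the mechanics concretely, here is a minimal sketch, using hypothetical domains rather than any real ad network, of how the two kinds of cookies behave from the browser’s point of view:

```ts
// First-party cookies are set by the site you are actually visiting. A page's
// own script can write one via document.cookie; this is what keeps carts and
// logins working, and it is not what Google is phasing out.
document.cookie = "cart_items=3; path=/; max-age=86400";

// Third-party cookies come from other domains embedded in the page. When
// news-site.example loads an ad from tracker.example, the tracker's HTTP
// response can include a header like:
//
//   Set-Cookie: uid=abc123; Domain=tracker.example; SameSite=None; Secure
//
// Every other site that also embeds tracker.example will make the browser
// send uid=abc123 back, letting the tracker stitch together a profile of
// your visits across otherwise unrelated sites.
console.log(document.cookie); // only lists cookies for the current site
```

Chrome’s new protections target the second pattern; the first keeps working as before.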

The 1% of Chrome users selected for Google’s “Tracking Protection” test should receive a notification when they log onto Chrome with the title “browse with more privacy.” Users will also see an eyeball logo tucked away in their URL search bar to signify that the new tracking protections are on. If a site repeatedly fails to load because it can’t work without the blocked cookies, users may be prompted with an option to temporarily re-enable the trackers. Some of this, Google admits, is still a work in progress.

“As we work to make the web more private, we’ll provide businesses with tools to succeed online so that high quality content remains freely accessible,” Chavez added.

Big Tech’s clash over cookies

Privacy advocates have long criticized third-party cookies due to the amount of highly specific personalized data they can include. Large tech firms like Facebook, and Google itself, have faced pushback for letting advertisers direct ads to users who’ve expressed racist sentiments. That coincided with a growing public uneasiness over the types and amount of data governments and private companies are able to siphon up. To that point, a whopping 81% of US adults surveyed by Pew Research last year said they were concerned about how companies use the data they collect about them.

Some browsers, like Apple’s Safari and Firefox, moved to block third-party trackers by default years ago. Apple went a step further in 2021 with the release of its App Tracking Transparency feature, which prompts iOS users with a notification when an app attempts to track their activity. That tool alone, part of a larger societal shift away from cookies, reportedly cost Facebook around $10 billion in lost advertisement sales in 2022.

Google’s ‘Privacy Sandbox’: Privacy-preserving or tracking by another name?

When cookies are finally eliminated for all Chrome users by the end of 2024, they will be replaced by an initiative Google calls its “Privacy Sandbox.” In a nutshell, the new initiative will use a variety of application programming interfaces (APIs) that rely on anonymized signals stored in a user’s Chrome browser to share information with advertisers. The sandbox aims to reduce cross-app tracking while still allowing ads to support free access to online services.

One of the more important of those APIs, which Google calls “Ad Topics,” works by placing Chrome users into certain categories based on the websites they’ve viewed. Advertisers, and even Google itself, won’t be able to see any specific user’s exact browsing history or personal identifiers. Instead, they will know a certain user is interested in a specific topic. Those topics include categories with names like “Fan Fiction,” “Early Childhood Education,” and “Parenting.” In theory, this new framework should still give marketing firms access to the user data necessary to generate effective targeted ads while bolstering personal privacy protections.
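
In practice, an ad script would query those interest categories through a browser call rather than reading a tracking cookie. The sketch below is based on the Topics API proposal in Google’s Privacy Sandbox documentation; the method ships only in Privacy Sandbox-enabled Chrome builds, and the exact returned fields may shift as the proposal evolves:

```ts
// Hypothetical sketch: an ad script asking Chrome for a visitor's coarse
// interest categories instead of reading a third-party cookie.
async function fetchAdTopics(): Promise<void> {
  // Feature-detect: only Privacy Sandbox-enabled Chrome builds expose this.
  if ("browsingTopics" in document) {
    // Returns a few topic objects: numeric IDs that map into a public
    // taxonomy of interests (e.g. "Fitness"), not URLs or browsing history.
    const topics = await (document as any).browsingTopics();
    console.log(topics); // e.g. [{ topic: 186, taxonomyVersion: "chrome.1", ... }]
  } else {
    console.log("Topics API unavailable; fall back to contextual ads.");
  }
}
```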

“The most significant item in the Privacy Sandbox is Google’s proposal to move all user data into the browser where it will be stored and processed,” Permutive Marketing Director Amit Kotecha said in a previous interview with DigiDay. “This means that data stays on the user’s device and is privacy compliant. This is now table stakes and the gold standard for privacy.”

Naturally, many marketers aren’t thrilled about losing one of their most valuable pieces of online tracking technology. US broadcasters alone, according to a recent report from the National Association of Broadcasters (NAB), estimate they may lose $2.1 billion annually as a result of the change. Others wished Google had provided a longer transition period. 

“The timing remains poor,” IAB Tech Lab CEO Anthony Katsur said in a recent interview with The Wall Street Journal. “Launching it during the industry’s greatest revenue-generating part of the year is just a terrible decision.” 

A Google spokesperson told PopSci they were confident companies could effectively adapt to the changes. 

“We are confident the industry can make the transition in 2024 based on all the tremendous progress we’ve seen from leading companies, who have indicated publicly they’ve either started testing or plan to do so in January,” the spokesperson said in an email.

On the other side of the coin, some consumer privacy advocates who’ve long called for an end to cookies worry Google’s replacement still falls short and ultimately amounts to a similar form of online tracking with a different name. 

“Google referring to any of this as ‘privacy’ is deceiving,” Electronic Frontier Foundation Security and Privacy Activist Thorin Klosowski wrote in a recent blog post. “Even if it’s better than third-party cookies, the Privacy Sandbox is still tracking, it’s just done by one company instead of dozens.” 

Klosowski went on to say tech firms like Google should work towards creating a world completely free of behavioral advertising.

How will browsing the web change without cookies? 

Google’s decision to phase out cookies essentially rewrites the rules for advertising on the internet. Yet it won’t really mean all that much for the vast majority of everyday users. If the switch away from cookies works as intended, Chrome users can continue browsing the web in much the same way as they did before, albeit with an underlying layer of stronger privacy. The bulk of the noticeable changes will fall on developers, not users.

Cookies aren’t being purged entirely, either. First-party cookies, the type that help you stay logged into certain websites, shouldn’t go away as a result of the changes. Still, the elimination of third-party cookies amounts to a tectonic shift in the way the internet works, which means some sites are likely to break or experience issues during the transition. Maybe more importantly, the shift could leave the internet devoid of possibly the most disarmingly cute name for a pervasive surveillance tool.

Disney finally relinquishes ‘Steamboat’ Mickey Mouse to public domain after testy 95 years
https://www.popsci.com/technology/disney-mickey-public-domain/
Tue, 02 Jan 2024
Here’s what you need to know.

[Image: Mickey Mouse steers himself into the public domain. Credit: Creative Commons/Wikipedia]

A version of Disney’s iconic Mickey Mouse, once a leading villain in the US copyright debate, has officially entered the public domain after 95 long years. Moving forward, anyone can reuse or reference the black-and-white Mickey and Minnie Mouse that appeared in the 1928 films Steamboat Willie and Plane Crazy without fear of legal reprisals from Disney. The long-awaited expiration marks an important moment for public domain advocates, who’ve long associated the goofy mouse with corporate efforts to extend the shelf life of copyrighted works.

What is public domain and why is Mickey entering it?

Copyright laws are intended to protect the intellectual property (IP) created by individuals or businesses for a certain amount of time. These protections cover a wide variety of IP, from movies and music to books and creative characters like Mickey, and are critical to ensuring creators can profit off their work. In the US, copyright protections generally expire 70 years after an author’s death or 95 years after publication.

Once expired, those works enter the public domain, where they can largely be used freely without requiring permission from the original author. Public domain advocates say reasonable copyright expiration dates are necessary to ensure the public has the opportunity to advance culture and archive significant works for the historical record. J.M. Barrie’s Peter Pan, Sherlock Holmes, and Winnie the Pooh are just a few examples of copyrighted works or famous characters that have entered the public domain in recent years.

The January 1, 2024 expiration means creators can do basically anything they want with Mickey, within reason. Fans, for example, could use the mouse’s likeness in videos, stories, or even plastered on t-shirts. Mickey Mouse-inspired slasher movies and horror video games are already reportedly in the works.

But there are some important exceptions. First, the copyright expiration only applies to the Mickey from 1928, which means the white-gloved, bubbly-eyed Mickey most commonly associated with the Disney mouse is still protected under copyright. Creators also can’t use Mickey to create content that could mislead the public into believing their creation is endorsed by Disney.

Disney has a reputation for aggressively pursuing legal actions against people who violate their copyright and trademarks. Three daycare centers in Florida famously found themselves on the wrong end of a Disney legal threat in 1989 after they reportedly plastered images of Mickey, Minnie, and Donald Duck on their walls.

“If we were to allow them to use the characters, then we would have to allow everyone else to do so,” Disney spokesperson Chuck Champlin said at the time, according to the Chicago Tribune. “If we don’t protect our trademarks, we could lose our copyright and be out of business.”

This week’s public domain inductees were a long time coming for many copyright critics, thanks in no small part to a controversial piece of 1998 federal legislation backed by Disney. At the time, copyright protections only lasted 50 years after an author’s death or 75 years after creation. Those limits were extended by 20 years following the passage of the Sonny Bono Copyright Term Extension Act. Disney reportedly supported the extension at the time, which led some critics to derisively refer to the law as the “Mickey Mouse Protection Act.” And with that, the Mickey Mouse copyright villain was born. 

“The slow-motion arson attack on the public domain meant that two generations of creators were denied the public domain that every other creator in the history of the human race had enjoyed,” author and Electronic Frontier Foundation Special Adviser Cory Doctorow said of copyright extension in a recent blog post

Disney’s hand-wringing around copyright extensions, critics say, is made worse by the fact that many of the company’s most iconic works, like Frozen and The Lion King, actually draw inspiration from poems and music that previously entered the public domain. Mickey Mouse himself, Duke Center for the Study of the Public Domain Director Jennifer Jenkins notes, actually stems in part from the personalities and characteristics of silent film celebrities like Charlie Chaplin and Douglas Fairbanks.

Disney did not immediately respond to PopSci’s request for comment. 

Large language models like OpenAI’s ChatGPT and various image generators like DALL-E and Stability AI’s Stable Diffusion currently ingest billions of likely copyright-protected works to train their models. But a plethora of copyright lawsuits filed by authors and creators could upend that model by making it illegal to train on protected works.

If the creators succeed, AI companies could be forced to drastically limit the types of IP their models are trained on. Those limitations could make access to material in the public domain essential for future AI models. For now though, AI companies and everyday creators alike can at least rest easy when it comes to reusing the classic Mickey. 

Tech trade group sues over ‘unconstitutional’ Utah teen social media curfew law
https://www.popsci.com/technology/lawsuit-utah-teen-social-media-curfew/
Wed, 20 Dec 2023
The state’s Social Media Regulation Act is set to take effect March 1, 2024.

[Image: The law wouldn’t just affect minors. Credit: DepositPhotos]

A trade group associated with Meta, TikTok, and X is fighting back against a Utah law forcing minors to obtain parental consent and abide by a strict curfew in order to access social media. Though lawmakers in Utah and a growing number of other states believe regulations like these are necessary to protect young users from online harms, a new lawsuit filed by NetChoice argues the laws go too far and violate First Amendment rights to free expression. 

Utah officially passed its Social Media Regulation Act back in March. The law, which is set to take effect March 1, 2024, is actually a combination of a pair of bills, SB152 and HB311. Together, the bills prohibit minors from opening a new social media account without first receiving written parental consent. They also restrict minors from accessing social media between 10:30 p.m. and 6:30 a.m., unless they receive permission from a parent or guardian. Tech platforms would be required to verify the ages of their users. Failure to do so could result in a $2,500 fine per violation.

Utah lawmakers supporting the law say it’s necessary to reduce young users’ exposure to potentially harmful material online, such as content related to eating disorders and self-harm. Lawmakers say the curfew, one of the more controversial elements of the law, could help ensure minors aren’t having their sleep impacted by excessive social media use. A US Surgeon General advisory report released earlier this year warned of potential sleep deprivation linked to excessive social media use.

“While there are positive aspects of social media, gaming, and online activities, there is substantial evidence that social media and internet usage can also be extremely harmful to a young person’s mental and behavioral health and development,” Utah Attorney General Sean Reyes said during a press conference earlier this year. 

NetChoice, in a suit filed Tuesday, claims the provisions violate Utahns’ First Amendment rights and amount to an “unconstitutional attempt to regulate both minors’ and adults’ access to—and ability to engage in—protected expression.” The suit also takes aim at the law’s age verification requirement, which NetChoice argues would violate the privacy of all Utah social media users and ultimately do more harm than good.

“The state is telling you when you can access a website and what websites you can access,” NetChoice Vice President and General Counsel Carl Szabo told PopSci. “Our founders recognized the dangers in allowing the government to decide what websites we can visit and what apps we can download. Utah is disregarding that clear prohibition in enacting this law.” 

The law wouldn’t just affect minors, either. Szabo said the law’s rules forcing platforms to verify the ages of users under 18 would, by definition, also result in verifying the ages of users over 18. Social media companies would be required to use telecom subscriber information, a Social Security number, a government ID, or facial analysis to verify those identities if the law takes effect.

Aside from its constitutional issues, Szabo and NetChoice argue the bill would harm young users in the state by putting them at a disadvantage to minors in other states who have access to more information. The digital curfew, which the suit refers to as a “blackout,” could restrict students from accessing educational videos or news articles during a large chunk of the day. The suit claims the curfew could also interfere with young users trying to communicate across multiple time zones.

“The First Amendment applies to all Americans, not just Americans over the age of 18,” Szabo said.

NetChoice is calling on courts to halt the law from taking effect while its lawsuit winds its way through the legal system. That could happen: the trade group already successfully petitioned a US District Court to halt a similar parental consent law from going into effect in Arkansas earlier this year. A spokesperson for Utah’s attorney general told PopSci, “The State of Utah is reviewing the lawsuit but remains intently focused on the goal of this legislation: Protecting young people from negative and harmful effects of social media use.”

Statewide online parental consent laws and bills regulating minors’ use of social media picked up steam in 2023. Texas, Arkansas, Louisiana, and Ohio have all proposed or passed legislation limiting minors’ access to social media and severely limiting the types of content platforms can serve them. Some state laws, like the one in Utah, would go a step further, granting parents full access to a child’s account and banning targeted advertising to minors.

Supporters of these state bills cite a growing body of academic research appearing to draw links between excessive social media use and worsening teen depression rates. But civil liberties organizations like the ACLU say these efforts, though often well-intentioned, could wind up backfiring by stifling minors’ freedom of expression and limiting their access to online communities and resources. Szabo, of NetChoice, said states should step away from online parental consent laws broadly and instead invest in digital wellness and education campaigns.

Amazon’s Project Kuiper successfully tests satellite space lasers
https://www.popsci.com/technology/amazon-project-kuiper-space-lasers/
Fri, 15 Dec 2023
Amazon says "the OISL network enables it to transfer data from one part of the world to another without touching the ground."
Amazon says "the OISL network enables it to transfer data from one part of the world to another without touching the ground.". YouTube/Amazon

The technology could one day help provide high-speed broadband to ships at sea and campers in the remote wilderness.

The post Amazon’s Project Kuiper successfully tests satellite space lasers appeared first on Popular Science.

]]>
Amazon says "the OISL network enables it to transfer data from one part of the world to another without touching the ground."
Amazon says "the OISL network enables it to transfer data from one part of the world to another without touching the ground.". YouTube/Amazon

SpaceX and its billionaire CEO Elon Musk may finally have a reason to look over their shoulders in the satellite internet race. On Thursday, Amazon revealed it successfully used a space laser technology called “optical inter-satellite link” (OISL) to beam a 100 gigabit per second connection between two of its Project Kuiper satellites stationed 621 miles apart in low Earth orbit. That’s roughly the distance between New York and Cincinnati. Amazon believes that same tech could soon help it deliver fast and reliable broadband internet to some of the most remote regions on Earth.

Typically, LEO satellites send data between antennas at the customer’s location and ground gateways that connect back to the internet. An OISL eliminates the need for that immediate data downlink to the ground, which can increase internet speed and reduce latency, particularly for end users in remote areas. The ability to communicate directly between satellites means that, in practical terms, OISLs could bring strong internet connections to ships in the ocean or offshore oil rigs many miles from land.

“With optical inter-satellite links across our satellite constellation, Project Kuiper will effectively operate as a mesh network in space,” Project Kuiper Vice President of Technology Rajeev Badyal said in a statement.

“Mesh networks” generally refer to a group of connected devices that work side by side to form a single network. In a press release, Amazon says it plans to outfit its satellites with multiple optical terminals so several of them can connect with each other simultaneously. In theory, that should establish “high-speed laser cross links” that form the basis of a fast mesh network in space. Amazon expects this space-based mesh network to be capable of transferring data around 30% faster than terrestrial fiber optic cables sending data across roughly the same distance. How that actually plays out in practice for everyday users remains to be seen, since Project Kuiper’s services aren’t currently available to consumers.
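
The physics behind that 30% figure is straightforward: light slows down inside glass fiber by the fiber’s refractive index, while an inter-satellite laser travels through vacuum at full speed. A rough back-of-the-envelope check, assuming straight-line paths and a typical refractive index of about 1.47 (simplifying assumptions, not Amazon’s published numbers):

```ts
// Back-of-the-envelope latency comparison: vacuum laser link vs. glass fiber.
// Assumes straight-line paths and ignores routing and switching overhead.
const C_KM_PER_S = 299_792; // speed of light in vacuum, km/s
const FIBER_INDEX = 1.47;   // typical refractive index of optical fiber

const distanceKm = 5_000;   // hypothetical long-haul route

const vacuumMs = (distanceKm / C_KM_PER_S) * 1_000;
const fiberMs = (distanceKm / (C_KM_PER_S / FIBER_INDEX)) * 1_000;

console.log(`vacuum: ${vacuumMs.toFixed(1)} ms`); // ~16.7 ms
console.log(`fiber:  ${fiberMs.toFixed(1)} ms`);  // ~24.5 ms
// Vacuum delivery takes 1/1.47, or about 68%, of the fiber time, i.e.
// roughly 30% faster, regardless of the distance chosen.
```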

Amazon launched its first two satellites into orbit in October and carried out the OISL tests in November. The two satellites, KuiperSat-1 and KuiperSat-2, were reportedly able to send and receive data at speeds of roughly 100 gigabits per second for an hour-long test window. The satellites had to maintain that link while moving at up to 15,534 miles per hour. 

Kuiper Government Solutions Vice President Ricky Freeman said the network’s ability to provide “multiple paths to route through space” could be particularly appealing to customers “looking to avoid communications architecture that can be intercepted or jammed.”

When asked by PopSci if the potential customer described here is a military or defense contractor, an Amazon spokesperson said Project Kuiper is focused “first and foremost” on providing internet coverage to residential customers in remote and underserved communities. The spokesperson went on to say it may approach government partners in the future as well. 

“We are committed to working with public and private sector partners that share our commitment to bridging the digital divide,” the spokesperson said. “We’re building a flexible, multi-purpose communications network to serve a variety of customers that will include space and government agencies, mobile operators, and emergency and disaster relief operations.” 

Project Kuiper slowly moving out of the shadows

Project Kuiper launched in 2019 with the goal of creating a constellation of 3,236 satellites in low Earth orbit. Once completed, Amazon believes the constellation could provide fast and affordable broadband internet to previously underserved regions around the globe. But the project has taken its sweet time to actually lift off. After more than four years, the company finally launched its first satellites into orbit in October. As of this month, the company had reportedly ordered 94 rocket launches, according to CNBC.

SpaceX, Project Kuiper’s biggest rival, already has a huge head start. The company has reportedly launched more than 5,000 Starlink satellites into space and currently offers its satellite internet service to paying customers. In a surprise twist, Amazon recently struck a deal with its rival under which it will use SpaceX rockets to quickly launch more Kuiper satellites into orbit.

The new laser tests prove Amazon’s Project Kuiper is indeed much more than a wishful multi-billion-dollar side quest. Whether or not it can ramp up satellite deployments in time to catch up with SpaceX, however, remains to be seen.

Correction 12/15/23: An earlier version of this story stated that Amazon would bypass the need for a ground link.

New UK guidelines for judges using AI chatbots are a mess
https://www.popsci.com/technology/ai-judges/
Wed, 13 Dec 2023
The suggestions attempt to parse appropriate vs. inappropriate uses of LLMs like ChatGPT.

[Image: “They [AI tools] may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts.” Credit: DepositPhotos]

Slowly but surely, text generated by AI large language models (LLMs) is weaving its way into our everyday lives, now including legal rulings. New guidance released this week by the UK’s Judicial Office provides judges with some additional clarity on when exactly it’s acceptable or unacceptable to rely on these tools. The guidance advises judges against using the tools to generate new analyses, though it allows using them to summarize texts. Meanwhile, an increasing number of lawyers and defendants in the US find themselves fined and sanctioned for sloppily introducing AI into their legal practices.

[ Related: “Radio host sues ChatGPT developer over allegedly libelous claims” ]

The Judicial Office’s AI guidance is a set of suggestions and recommendations intended to help judges and their clerks understand AI and its limits as the tech becomes more commonplace. These guidelines aren’t enforceable rules of law but rather a “first step” in a series of efforts from the Judicial Office to clarify how judges can interact with the technology.

In general, the new guidance says judges may find AI tools like OpenAI’s ChatGPT useful as a research aid for summarizing large bodies of text or for administrative tasks like drafting emails or memoranda. At the same time, it warns judges against using the tools to conduct legal research that relies on new information that can’t be independently verified. As for forming legal arguments, the guidance warns that public AI chatbots simply “do not produce convincing analyses or reasoning.” Judges may find some benefit in using an AI chatbot to dig up material they already know to be accurate, the guidance notes, but they should refrain from using the tools to conduct new research into topics they can’t verify themselves. The guidance, it appears, puts the responsibility on the user to tell fact from fiction in an LLM’s outputs.

“They [AI tools] may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts,” the guidance reads. 

The guidance goes on to warn judges that AI tools can spit out inaccurate, incomplete, or biased information, even if they are fed highly detailed or scrupulous prompts. These odd AI fabrications are generally referred to as “hallucinations.” Judges are similarly advised against entering any “private or confidential information” into the services because several of them are “open in nature.”

“Any information that you input into a public AI chatbot should be seen as being published to all the world,” the guidance reads. 

Since the information spat out from a prompt is “non-definitive” and potentially inaccurate, while the information fed into the LLM must not include “private” information potentially key to a full review of, say, a lawsuit’s text, it is not quite clear what actual use these tools would serve in a legal context.

Context-dependent data is also an area of concern for the Judicial Office. The most popular AI chatbots on the market today, like OpenAI’s ChatGPT and Google’s Bard, were developed in the US and trained on a large corpus of US-focused data. The guidance warns that this emphasis on US training data could give AI models a “view” of the law that’s skewed towards American legal contexts and theory. Still, at the end of the day, the guidance notes, judges are the ones held responsible for material produced in their name, even if it was produced with the assistance of an AI tool.

Geoffrey Vos, the Head of Civil Justice in England and Wales, reportedly told Reuters ahead of the guidance reveal that he believes AI “provides great opportunities for the justice system.” He went on to say he believed judges were capable of spotting legal arguments crafted using AI.

“Judges are trained to decide what is true and what is false and they are going to have to do that in the modern world of AI just as much as they had to do that before,” Vos said, according to Reuters.

Some judges already find AI ‘jolly useful’ despite accuracy concerns

The new guidance comes three months after UK court of appeal judge Lord Justice Birss used ChatGPT to provide a summary of an area of law and then used part of that summary to write a verdict. The judge reportedly hailed ChatGPT as “jolly useful” at the time, according to The Guardian. Speaking at a press conference earlier this year, Birss said he should still ultimately be held accountable for the judgment’s content, even if it was created with the help of an AI tool.

“I’m taking full personal responsibility for what I put in my judgment, I am not trying to give the responsibility to somebody else,” Birss said, according to The Law Gazette. “All it did was a task which I was about to do and which I knew the answer and could recognise as being acceptable.”

A lack of clear rules clarifying when and how AI tools can be used in legal filings has already landed some lawyers and defendants in hot water. Earlier this year, a pair of US lawyers were fined $5,000 after they submitted a court filing that contained fake citations generated by ChatGPT. More recently, a UK woman was also reportedly caught using an AI chatbot to defend herself in a tax case. She ended up losing her case on appeal after it was discovered case law she had submitted included fabricated details hallucinated by the AI model. OpenAI was even the target of a libel suit earlier this year after ChatGPT allegedly authoritatively named a radio show host as the defendant in an embezzlement case that he had nothing to do with. 

[ Related: “EU’s powerful AI Act is here. But is it too late?” ] 

The murkiness of AI in legal proceedings might get worse before it gets better. Though the Biden Administration has offered proposals governing the deployment of AI in legal settings as part of its recent AI executive order, Congress still hasn’t managed to pass any comprehensive legislation setting clear rules. On the other side of the Atlantic, the European Union recently agreed on its own AI Act, which introduces stricter safety and transparency rules for a wide range of AI tools and applications deemed “high risk.” But the actual penalties for violating those rules likely won’t see the light of day until 2025 at the earliest. So, for now, judges and lawyers are largely flying by the seat of their pants when it comes to sussing out the ethical boundaries of AI use.

EU’s powerful AI Act is here. But is it too late?
https://www.popsci.com/technology/ai-act-explained/
Tue, 12 Dec 2023
Technology moves faster than ever. AI regulators are fighting to keep up.

[Image: The framework prohibits mass, untargeted scraping of face images from the internet or CCTV footage to create a biometric database. Credit: DepositPhotos]

European Union officials made tech policy history last week, enduring 36 hours of grueling debate to finally settle on a first-of-its-kind comprehensive AI safety and transparency framework called the AI Act. Supporters of the legislation and AI safety experts told PopSci they believe the new guidelines are the strongest of their kind worldwide and could set an example for other nations to follow.

The legally binding framework sets crucial new transparency requirements for OpenAI and other generative AI developers. It also draws several red lines banning some of the most controversial uses of AI, from real-time facial recognition scanning and so-called emotion recognition to predictive policing techniques. But there could be a problem brewing under the surface. Even once the Act is voted on, Europe’s AI cops won’t actually be able to enforce any of those rules until 2025 at the earliest. By then, it’s anyone’s guess what the ever-evolving AI landscape will look like.

What is the EU AI Act? 

The EU’s AI Act breaks AI tools and applications into four distinct “risk categories,” with those placed on the highest end of the spectrum exposed to the most intense regulatory scrutiny. AI systems considered high risk, which would include self-driving vehicles, tools managing critical infrastructure, medical devices, and biometric identification systems, among others, would be required to undergo fundamental rights impact assessments, adhere to strict new transparency requirements, and be registered in a public EU database. The companies responsible for these systems will also be subject to monitoring and record-keeping practices to assure EU regulators that the tools in question don’t pose a threat to safety or fundamental human rights.

It’s important to note that the EU still needs to vote on the Act, and a final version of the text has not been made public. A final vote on the legislation is expected to occur in early 2024.

“A huge amount of whether this law has teeth and whether it can prevent harm is going to depend on those seemingly much more technical and less interesting parts.”

The AI Act goes a step further and bans other use cases outright. In particular, the framework prohibits mass, untargeted scraping of face images from the internet or CCTV footage to create a biometric database. This could potentially impact well-known facial recognition startups like Clearview AI and PimEyes, which reportedly scrape the public internet for billions of face scans. Jack Mulcaire, Clearview AI’s General Counsel, told PopSci that the company does not operate in or offer its products in the EU. PimEyes did not immediately respond to our request for comment.

Emotion recognition, which controversially attempts to use biometric scans to detect an individual’s feelings or state of mind, will be banned in workplaces and schools. Other AI systems that “manipulate human behavior to circumvent their free will” are similarly prohibited. AI-based “social scoring” systems, like those notoriously deployed in mainland China, also fall under the banned category.

Tech companies found sidestepping these rules or pressing on with banned applications could see fines ranging between 1.5% and 7% of their total revenue, depending on the violation and the company’s size. This penalty system is what gives the EU AI Act teeth and what fundamentally separates it from the voluntary transparency and ethics commitments recently secured by the Biden Administration in the US. Biden’s White House also recently signed a first-of-its-kind AI executive order laying out his vision for future US AI regulation.

In the immediate future, large US tech firms like OpenAI and Google that operate “general purpose AI systems” will be required to keep EU officials up to date on how they train their models, report summaries of the types of data they use to train those models, and create a policy acknowledging they will adhere to EU copyright laws. General models deemed to pose a “systemic risk,” a label Bloomberg estimates currently only includes OpenAI’s GPT, will be subject to a stricter set of rules. Those could include requirements forcing the model’s maker to report the tool’s energy use and cybersecurity compliance, as well as calls for them to perform red-teaming exercises to identify and potentially mitigate signs of systemic risk.

Generative AI models capable of creating potentially misleading “deepfake” media will be required to clearly label those creations as AI-generated. Other US AI companies that create tools falling under the AI Act’s “unacceptable” risk category would likely no longer be able to continue operating in the EU when the legislation officially takes effect.

[ Related: “The White House’s plan to deal with AI is as you’d expect” ]

AI Now Institute Executive Director Amba Kak spoke positively about the enforceable aspect of the AI Act, telling PopSci it was a “crucial counterpoint in a year that has otherwise largely been a deluge of weak voluntary proposals.” Kak said the red lines barring particularly threatening uses of AI and the new transparency and diligence requirements were a welcome “step in the right direction.”

Though supporters of the EU’s risk-based approach say it helps avoid subjecting more mundane AI use cases to overbearing regulation, some European privacy experts worry the structure places too little emphasis on fundamental human rights and detracts from the approach of past EU legislation like the 2018 General Data Protection Regulation (GDPR) and the Charter of Fundamental Rights of the European Union (CFREU).

“The risk-based approach is in tension with the rest of the EU human rights frameworks,” European Digital Rights Senior Policy Advisor Ella Jakubowska told PopSci during a phone interview. “The entire framework that was on the table from the beginning was flawed.”

The AI Act’s risk-based approach, Jakubowska warned, may not always provide a full, clear picture of how certain seemingly low risk AI tools could be used in the future. Jakubowska said rights advocates like herself would prefer mandatory risk assessments for all developers of AI systems.

“Overall it’s very disappointing,” she added. 

Daniel Leufer, a Senior Policy Analyst at the digital rights organization AccessNow, echoed those concerns regarding the risk-based approach, which he argues was designed partly as a concession to tech industry groups and law enforcement. Leufer says AccessNow and other digital rights organizations had to push EU member states to agree to include the “unacceptable” risk categories, which some initially refused to acknowledge. Kak, the AI Now Institute Executive Director, went on to say the AI Act could have done more to clarify regulations around AI applications in law enforcement and national security domains.

An uncertain road ahead 

The framework agreed upon last week was the culmination of years’ worth of back-and-forth debate between EU member states, tech firms, and civil society organizations. First drafts of the AI Act date back to 2021, months before OpenAI’s ChatGPT and DALL-E generative AI tools enraptured the minds of millions. The skeleton of the legislation reportedly dates back even further, to as early as 2018.

Much has changed since then. Even the most prescient AI experts would have struggled to imagine witnessing hundreds of top technologists and business leaders frantically adding their names to impassioned letters urging a moratorium on AI tech to supposedly safeguard humanity. Few similarly could have predicted the current wave of copyright lawsuits lodged against generative AI makers questioning the legality of their massive data scraping techniques or the torrent of AI-generated clickbait filling the web. 

Similarly, it’s impossible to predict what the AI landscape will look like in 2025, the earliest the EU could actually enforce its hefty new regulations. Axios notes EU officials will urge companies to agree to the rules in the meantime, but on a voluntary basis.

Update 1/4/24 2:13PM: An earlier version of this story said Amba Kak spoke positively about the EU AI Act. This has been edited to clarify that she specifically spoke favorably about the enforceable aspect of the Act.
