Tech Regulation - Policies & Trends Shaping the Industry | TechInformed | https://techinformed.com/tag/regulation/

UK’s internet watchdog unveils online criminal crackdown
Mon, 16 Dec 2024 | https://techinformed.com/uks-internet-watchdog-unveils-online-criminal-crackdown/

The UK’s communications regulator, Ofcom, has given social media giants such as Facebook parent Meta and TikTok owner ByteDance a three-month deadline to address illegal activities on their platforms.

The regulator said it will leverage powers granted to it under the UK’s Online Safety Act to introduce rules to combat criminal harms, including terrorism, fraud, hate speech, child sexual abuse, and the encouragement of suicide.

The new safety requirements will apply to various types of online services, including social media platforms, search engines, messaging apps, gaming and dating platforms, as well as pornography and file-sharing sites.

Companies have until March 17, 2025, to implement the safety measures.

Changes firms must make include designating a senior leader within their top governance team who will be responsible for ensuring compliance with the rules around illegal content, as well as the reporting and handling of complaints.

It also requires tech firms to ensure their moderation teams are appropriately resourced and trained. This means setting performance targets in order to remove illegal material swiftly, making reporting and complaints functions easier for users to find and use, and optimising algorithms to ensure illegal content is harder to distribute.

Child safety online

 

The new codes also aim to enforce measures to protect children from sexual abuse and exploitation online.

This will mean platforms should ensure children’s accounts and locations are not visible to users other than their friends by default.

Children must also receive information from the platforms to educate them on the risk of sharing personal information, and children’s accounts should not be suggested as connections.

The online watchdog quotes children aged 14 to 17 who say they have received messages asking for bikini photos in exchange for money, or other unwanted invitations.

“I don’t want my siblings to go through what I did on social media. I feel happy about these measures because I know that my sisters and siblings would feel safe,” said one girl, aged 14.

Another 14-year-old added: “[This will be] effective because no more strangers can be added, there are no more creeps sending things, and it will decrease grooming.”

According to an Ofcom study, many young people felt interactions with strangers, including adults or users perceived to be adults, are currently an inevitable part of being online—they described becoming ‘desensitised’ to receiving sexualised messages.

Fraud and terrorism

 

Ofcom also aims to tackle fraud by ensuring sites and apps establish a dedicated reporting channel for organisations with fraud expertise.

The regulator said that this would allow them to flag known scams to platforms in real time so that action can be taken.

It also requires sites to remove users and accounts that spread terrorist content.

“For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits,” said Melanie Dawes, Ofcom’s chief executive.

“The safety spotlight is now firmly on tech firms and it’s time for them to act. We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year,” she added.

The UK Parliament gave Ofcom 18 months from the passage of the Online Safety Act, on October 26, 2023, to finalise its illegal harms and children’s safety codes of practice and guidance.

Why we can’t talk about AI without talking about trust
Sun, 08 Dec 2024 | https://techinformed.com/why-we-cant-talk-about-ai-without-talking-about-trust/

AI is becoming increasingly common in the tech stack, and there’s little doubt that it’s a game-changer for productivity.

Most global employees (70%) are willing to use AI to help manage their workload — a productivity boost that will be particularly welcome in the UK, where productivity levels are dwindling.

But as AI adoption accelerates, can we be sure our relationship with it is headed in the right direction? Vanta’s State of Trust 2024 Report details how AI governance and risk management are still relatively nascent.

For instance, only 43% of UK organisations currently conduct or are in the process of conducting regular AI risk assessments.

The EU AI Act may be a shift in the right direction, but there is concern over how the act will adapt to evolving AI threats. Moreover, UK organisations do not have to comply with it unless they operate within Europe.

This means a lot rests on UK organisations’ self-regulation. Yet when it comes to formal policies for governing AI usage, only 42% have implemented, or are in the process of implementing, a company AI policy.

This indicates the lag between security and the increased use of AI tools — and the need to discuss trust when discussing AI.

The risks of AI without trust

 

While AI is helping drive productivity, it also complicates the security landscape. Most UK businesses (66%) plan to invest more in security around AI use within their organisation in the next year.

This is a clear reaction to the fact that since the proliferation of AI, cyber-risks and threats have gone up — with businesses reportedly experiencing more phishing attacks (35%), a rise in AI-based malware (34%) and more compliance violations (27%).

As if that weren’t enough, the potential damage of using AI without risk management and an in-house policy goes beyond data breaches and cyber threats — it hits a company where it hurts most: its reputation.

More than half (53%) of UK organisations say that customer trust results from good security practices, meaning they must do more to protect it.

Demonstrating trust in the age of AI

 

Security professionals are already under pressure from a challenging security landscape and the burden of manual compliance tasks.

AI is both helping and further complicating this, and teams can’t be expected to take on more without losing their focus on mission-critical work. Equally, though, they can’t afford to ignore the problem.

Below are three ways for organisations to maintain a baseline of cybersecurity readiness that goes beyond the basic requirements of the EU AI Act, ensuring that their use of AI remains compliant and that they reap its benefits without exposure to unnecessary risks.

1. Strengthen their entire trust network

 

Trust is not just a reflection of an organisation. It also extends to its partners and the wider business ecosystem.

For instance, almost two-thirds (63%) of UK businesses agree that third-party breaches negatively impact their organisation’s reputation.

AI is set to complicate this further as more companies add AI to their tech stack and/or develop their own tools.

Therefore, UK organisations must strengthen their entire trust network to maintain trust.

Companies must raise the security bar and build a bespoke standard of trust that centralises visibility, however and wherever AI is being used.

This is how they can define good security — for themselves and the companies they work with.

2. Take tools (and customers and employees) seriously

 

Within the workplace, half of UK organisations (49%) have concerns about the use of AI and the risks it poses for the organisation’s security.

Plus, there are increasing news stories surrounding the consent-free use of customer data to train AI. In fact, our research found that while 29% of UK organisations require opt-in from customers to use their data for AI training, 74% of companies don’t offer an opt-out option.

When trust and transparency matter so much in and outside of an organisation, it’s imperative that companies go beyond the standard when demonstrating trust.

3. Power trust with automation and AI

 

The irony of AI complicating the day job of your security team is that it can also play a crucial role in helping relieve the compliance burden they face.

When asked about AI’s transformative impact, UK leaders listed the most transformative areas as improving the accuracy of security questionnaires (45%), streamlining vendor risk reviews and onboarding (43%), eliminating manual work (37%), and reducing the need for large teams (32%).

This shows just how transformative AI can be in unlocking efficiencies for security teams and ultimately driving business value for organisations.

By looking to AI as the solution rather than just the problem, organisations can do more with less.

This can transform the compliance burden security teams face, letting companies guard against their own weak points and focus on mission-critical work instead.

This is how organisations keep pace with where the world is headed and ensure that trust goes on the journey with them.

Shaping AI’s future: can the world agree on regulation?
Tue, 19 Nov 2024 | https://techinformed.com/shaping-ais-future-can-the-world-agree-on-regulation/


Two years on from the launch of ChatGPT, the growth of AI has been rapid. As rapid, in fact, as our concerns about our ability to control it.

Alongside the promise that it is the next industrial revolution, the gateway to a new era of productivity and economic growth, come fears that it is at worst an existential threat to humanity and, more likely, a powerful tool for bad actors to do harm.

This is where AI regulation comes in: a set of rules designed to let us reap the benefits of AI’s potential to transform our lives economically and socially, while protecting us from its dangers.

Major economies and political institutions have been grappling with this challenge for some time and some clear differences have emerged, with the US mostly favouring light-touch regulation with the aim of allowing the industry to grow.

In contrast the EU has gone for top-down, heavier regulation with more emphasis on safety and protection.

The Trump 2.0 approach

While the headlines over Trump’s re-election in the US have gravitated towards the differences between Biden/Harris and Trump, when it comes to AI there remains plenty of overlap.

Under the Biden administration, AI regulation has been fairly light touch with many of the executive orders regarding the threat of AI amounting to voluntary codes rather than the EU’s hard and fast rules with financial penalties.

Trump 2.0, advised by tech figures such as Elon Musk, is likely to advocate a deregulatory stance, contrasting with the EU approach, but with specific areas of concern around AI safety and national security.

Trump is likely to continue down the deregulation path, but with safety and national security concerns

 

“I would be surprised if you saw the administration push for federal AI legislation, but I think we will see a stronger embrace around areas such as Defence, National Security and Intelligence,” says US technology expert Bill Whyman, principal of Tech Dynamics LLC and a former senior manager at Amazon Web Services.

“It will be agency-specific regulation, but there will be much more caution around general prescriptive rules.”

Director of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS) Gregory Allen argues that Trump will be more about continuation than redirection, continuing the broad thrust of policy under Biden.

He will tweak 2023 executive orders about preventing AI technologies from limiting civil liberties but overall maintain the voluntary codes, says Allen, who was formerly director of strategy and policy at the Department of Defense (DOD) Joint Artificial Intelligence Center.

Trump’s focus will be deregulation to enable the US to ramp up to meet the energy needs of the fast-growing industry, enabling Big Tech companies like Amazon and Google to build power stations in the US rather than elsewhere, with all the jobs that that will create.

Trump might be deregulatory but not at the expense of safety, given that one of his major influences will be OpenAI pioneer Elon Musk, who has often expressed concerns about the uncontrolled development of AI on human extinction grounds, added Allen in a recent CSIS AI Policy Podcast.

Last week it was announced that Trump has handed Musk an efficiency role in the next government.

The other area where Trump might take an active interest is in semiconductor export rules. Biden has sought to control Chinese access to some of the US’s most powerful chips and Trump is likely to continue this, perhaps using it as a bargaining chip over the general subject of Chinese imports tariffs.

Stark contrasts

The US approach stands in stark contrast to the AI regulation already in place in the EU and countries such as China, which both offer more top-down regulation.

The likely result will be a fragmented global regulatory environment, creating trade frictions between the US and its trading partners despite early efforts at cooperation. A situation made even more likely by Trump’s promise to introduce trading tariffs.

With the EU’s AI Act likely to influence other major nations, US leadership in AI development could get more difficult, with the likes of Amazon, Apple, Google, Meta, Microsoft, and Nvidia facing multiple AI regimes around the world.

The EU’s AI Act

The European Commission is the organisation which has gone furthest fastest in producing a regulatory framework in its AI Act, billed as being the most comprehensive set of rules to date for managing the many risks that AI could create.

Turing Award-winning “godfathers” of generative AI Yoshua Bengio and Geoffrey Hinton have warned of dramatic risks that threaten humanity’s existence. Bengio, Hinton, Bill Gates, top executives from Google, Microsoft and OpenAI, and many AI luminaries from China and Russia have signed an open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The precautionary principle says that low-probability, high-impact outcomes should be taken seriously.

European Commission executive vice president Margrethe Vestager, who headed up the EU AI Act’s development, admitted back in 2023 that while the threat of extinction shouldn’t be dismissed, the bigger threat is that AI tools may be used to discriminate against groups of people in society, perpetuating social inequality.

“There is also a significant risk that the criminal sector digitises much faster than the rest of us,” she added. “Developments in voice recognition mean that if your social feed can be scanned to profile you then the risk of being manipulated is enormous. If we end up in a situation where we believe nothing [is real] then we have undermined our society.”

Four-tier approach

The AI Act differentiates between types of risk by sorting AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal or no risk. The higher the risk, the stricter the rules.

‘Unacceptable risk’ refers to AI which deploys subliminal techniques to distort behaviour, or to exploit the vulnerabilities of specific groups based on age, disability or racial grouping. Other high-risk areas are AI systems that manipulate people’s decisions or persuade them to engage in unwanted behaviours or use biometric data to infer things about them.

Generative AI like ChatGPT has additional requirements including disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.

The Act, which will be fully applicable by 2026, has teeth: those who break the rules risk penalties of up to €40m or 7% of annual turnover.

According to Vestager, the EU’s AI Act points the way for a global dialogue on how to regulate. Speaking at an EC workshop on AI this summer, Vestager pointed out that the EC is hot on regulating other forms of risk that go together with AI development, namely the risk of a handful of existing Big Tech companies using their market power to dominate a growing industry.

EC executive vice president Margrethe Vestager

 

“This isn’t like 20 years ago when we were just figuring out the internet and the digital economy was just budding,” declared Vestager. “We are now dealing with existing market power and all the issues that come with it.”

“Strong competition enforcement is always needed at times of big industrial and tech changes. It is then that markets can tip, that monopolies can be formed, and that innovation can be snuffed out.”

Why Europe has no Big Tech players

Not everybody is supportive of the EC’s AI Act, notably powerful voices in America’s tech world, as AJ Bell investment director Russ Mould points out.

“The US tech company view boils down to: ‘If you are wondering why Europe has no big tech companies, now you know. As soon as a new one appears, they try to regulate it, rein it in, and protect existing industries.’”

Nicolai Tangen, CEO of Norges Bank Investment Management, told the Financial Times’ Unhedged podcast recently that heavy AI regulation in Europe meant it was likely that the big tech players would continue to come from the US. “In America, you have lots of AI and little regulation, and in Europe, you have little AI and lots of regulation,” he said.

AI enthusiast, influential venture capitalist and Trump proselyte Marc Andreessen sees the drive to regulate AI as based in hysterical, irrational fear and adds that Big Tech in general is pushing AI regulation to protect their economic interests. That’s because big companies have the resources to cope with extensive AI rules, while small companies and start-ups (that venture capitalists invest in) mostly do not, says Andreessen.

The real risk is that Big Tech companies with an interest in AI are allowed to achieve ‘regulatory capture’, establishing a government-protected cartel that is insulated from market competition, say the venture capitalists.

Top-down or bottom-up

Tech sector expert Bill Whyman, who wrote a report on AI regulation for the Centre for Strategic and International Studies, argues that top-down regulation is not necessarily the way to go, particularly in the US market.

“Yes, you need rules to protect people and make it sustainable, but being first to regulate is not a good or bad thing in itself. People who tend to favour AI regulation (whether in the US or EU) think that strong national top-down rules are good, but I don’t think that necessarily fits the US case.”

“A bottom-up decentralised approach has advantages and that shouldn’t be dismissed, especially at the initial development stage of an emerging technology. So, while it’s attractive to want to regulate, you need to figure out what you want to do first, then figure out the governance structure to implement that.”

Whyman argues that any rules need to reflect the difference between legitimate businesses seeking to comply with rules and malicious actors with bad intent, who need tougher rules and punitive deterrents.

For other, lower-risk areas, pre-approval or licensing of AI won’t produce the desired innovation, he declares.

“All say they want innovation, but pre-approval or licensing deliberately creates barriers to market entry and hence works against innovation and open competition. Outside of high-risk areas, a lighter-touch pre-release notification system may achieve similar goals.”

What does Big Tech stand to gain or lose under a second Trump presidency?
Tue, 12 Nov 2024 | https://techinformed.com/big-tech-second-trump-term-impact/

Big Tech is currently assessing the advantages and disadvantages of a second Trump term, which may lead to antitrust actions being deprioritised while introducing new aspects of a global trade war.

There is no doubt that one Big Tech titan is already a big winner from Donald Trump’s presidential election victory: Elon Musk’s currency is literally and metaphorically riding high at Mar-a-Lago, Trump’s Florida base, where he is planning his transition to power.

But what about other Big Tech leaders? Google’s Sundar Pichai, Apple’s Tim Cook, and Meta’s Mark Zuckerberg were all seen cosying up to Trump during the election campaign, but it remains to be seen how Trump will handle the growing power and influence of Big Tech.

Meanwhile, Amazon boss Jeff Bezos came under fire after the newspaper he owns — the Washington Post — opted not to endorse Trump or his rival Kamala Harris, on the same day the billionaire met with the incoming President to discuss business.

Says Bill Whyman, tech industry expert and former senior manager at Amazon Web Services: “What we know about Trump from his first presidency is that he showed a willingness to disregard conventional wisdom and at times overruled the advice of his own experts, which creates more uncertainty and unpredictability.”

 

From antitrust to tariffs

 

So, what do we know? Trump has an increasingly close relationship with tech entrepreneur Elon Musk, with Musk likely to exert considerable influence on how Trump’s relationship with Big Tech evolves.

So far, Trump has said he’ll make drastic reforms to the entire federal government, and he has discussed granting Musk huge power over agencies which regulate his and other tech companies.

Trump will take office with a series of antitrust cases underway, challenging the market power of several big tech firms, headed by the anti-monopoly chair of the Federal Trade Commission, Lina Khan.

Many expect Khan to make way for another head of the FTC and for the antitrust actions against players such as Google and Microsoft, which characterised the Biden administration, to take more of a back seat.

Trump will focus more on Big Tech’s contribution to the US economy. Still, that contribution could be hugely affected by one of Trump’s favourite themes of the election campaign — tariffs.


He’s threatened to slap China with 60-100% tariffs on goods, and the prospect of a global trade war looms — a trade war that neither Big Tech nor the US consumer will appreciate as it’s likely to re-stoke inflationary pressure on the economy.

Undoubtedly, Big Tech firms, Apple and Tesla especially, have a lot riding on continued access to the Chinese market and supply chains.

Apple’s manufacturing relies heavily on Chinese facilities, while Tesla counts China as one of its fastest-growing markets.

Any downturn in US-China relations under Trump could trigger a backlash that might slow demand and disrupt production.

Anyone interested in Trump’s approach to US-China relations is watching the situation with TikTok: the social platform’s Chinese owner must sell it to a foreign buyer or risk it being banned from the USA.

He could leave the current administration’s decision to force owner ByteDance to sell in place or use it as a bargaining chip with the Chinese government, a move that would fit in well with his reputation as a dealmaker.

“What happens with TikTok will be a good guideline as to what might be expected over the next four years,” says lead technology analyst at the Economist Intelligence Unit, Dexter Thillien.

 

Lighter regulation

 

One subject that is much more predictable is regulation, which is expected to be much lighter under Trump. AI, in particular, is one area which is expected to benefit from less top-down regulation.

Silicon Valley venture capitalists Marc Andreessen and Ben Horowitz each donated $2.5 million to the Trump campaign, and Andreessen — historically a strong supporter of the Democrats — switched to Trump in this election because he disagreed with White House plans to “over-regulate” AI, a move which Andreessen insists will stifle innovation.

A top-down heavier regulatory approach certainly can favour the market power of existing Big Tech players because – unlike the start-ups – they have the resources and expertise to deal with it, says Bill Whyman.

Trump is likely to push for a deregulatory approach, focussed on competition with China and less on protecting citizens’ values and rights.

This will contrast with the EU and UK approaches, which are more interventionist. For instance, the EU’s AI Act has already created a clear framework for the future regulation of the industry.

Another sector set to benefit in the deregulatory world of Trump 2.0 will be crypto.

Previously a cryptocurrency sceptic, Trump is now its biggest fan, no doubt partly down to the influence of crypto and Dogecoin-enthusiast Elon Musk.

The crypto industry hopes that after big donations to the Trump campaign, the regulatory hurdles holding back the cryptocurrency revolution will ease up.

Trump has even suggested that crypto might help pay off the US’s increasing pile of government debt. But as they say, if a deal seems too good to be true, it probably is.

Trump vs Harris: Key tech policies in the US presidential election
Fri, 01 Nov 2024 | https://techinformed.com/us-election-2024-tech-policy-trump-harris-ai-cybersecurity/

With the 2024 US presidential election looming and polls suggesting a tight result, the Electoral College will ultimately determine the winner, not the popular vote.

The election sees former Republican President Donald Trump take on Democrat Vice President Kamala Harris, with polls due to close on Tuesday evening — some votes have already been cast.

Amidst the tension, the technology sector braces itself for policies that could reshape its future.

A recent EY survey highlights that 74% of tech industry leaders believe the election results will significantly impact the industry’s ability to compete globally, with AI, cybersecurity, trade policies, and regulatory frameworks among the areas under scrutiny.

Additionally, concerns over AI-driven disinformation, including deepfakes used for voter manipulation, have escalated, underscoring technology’s critical role in both the campaign and electoral process.

As the world eagerly anticipates the election outcome, TechInformed analyses the candidates’ interests in ten key technology areas:

1. Artificial Intelligence (AI) and Automation

 

Donald Trump

  • Proposes minimal AI regulation to maintain US innovation dominance
  • Prioritises defence AI applications with a focus on national security
  • Supports open-source AI development with limited federal oversight
  • Encourages private sector-led job training for AI-affected roles
  • Sees AI as an economic driver, supporting industry-led training for AI jobs

Kamala Harris

  • Promotes AI regulation for ethical use and to mitigate algorithmic harm
  • Supports transparency and accountability standards for AI algorithms
  • Plans to align AI use with privacy and civil rights safeguards
  • Favours partnerships with tech firms for standardised AI ethics
  • Likely to prioritise a national AI ethics framework, ensuring compliance

2. Cybersecurity

 

Donald Trump

  • Focuses on a defence-driven cybersecurity approach for critical infrastructure
  • Advocates expanding military cyber capabilities against foreign threats
  • Supports limited federal oversight on private-sector cybersecurity practices
  • Encourages R&D collaborations with tech firms for cyber-defence
  • Plans to increase investment in cyber units to protect national security

Kamala Harris

  • Advocates federal cybersecurity standards, particularly for critical industries
  • Supports transparency on data breaches, especially regarding sensitive data
  • Favours global collaboration to tackle cybersecurity threats internationally
  • Emphasises public-private partnerships for cybersecurity workforce development
  • Supports federal funding for research in advanced cyber defence

3. Big Tech and Antitrust Actions

 

Donald Trump

  • Criticises Big Tech, accusing firms of anti-conservative bias in content moderation
  • Supports ongoing antitrust actions against monopolistic companies
  • Emphasises free-market competition but targets companies seen as censoring conservative voices
  • Likely to relax antitrust enforcement for firms he deems aligned with free-market values
  • Advocates transparency for content moderation practices in Big Tech platforms

Kamala Harris

  • Continues Biden’s aggressive stance on Big Tech monopolies
  • Likely to pursue strict antitrust laws preventing anti-competitive mergers
  • Supports greater transparency in tech firms’ data privacy and competitive practices
  • Favours a level playing field for emerging businesses in tech sectors
  • Advocates for oversight of tech business practices to ensure fair competition

4. Data Privacy and Consumer Protection

 

Donald Trump

  • Advocates limited federal intervention, favouring industry-led data standards
  • Opposes extensive data privacy laws, suggesting market-driven solutions
  • Supports individual rights without enforcing strict federal mandates
  • Proposes minimal scrutiny over data handling to stimulate innovation
  • Favours self-regulation in data security within tech companies

Kamala Harris

  • Strongly backs federal data privacy legislation to protect consumers
  • Supports strict penalties for data breaches to ensure compliance
  • Favours encryption and security standards for protecting consumer data
  • Promotes alignment with international privacy standards for consistency
  • Advocates for transparency in companies’ data-sharing practices

5. US-China Tech Relations

 

Donald Trump

  • Strongly opposes Chinese tech influence, supporting tariffs on goods from companies like Huawei and ZTE
  • Plans to ban Chinese-owned platforms like TikTok, citing national security concerns
  • Emphasises restricting exports of sensitive tech like AI and semiconductors to China
  • Seeks supply chain independence from China in critical tech sectors
  • Proposes tariffs on Chinese tech firms considered security risks

Kamala Harris

  • Likely to maintain export restrictions on high-tech items with a “targeted” approach
  • Focuses on coalition-building with allies on tech policies with China
  • Aims to reduce US dependency on Chinese-made tech components
  • Seeks to address human rights issues tied to Chinese surveillance tech
  • Supports investments in US manufacturing to counterbalance China’s tech influence

6. Semiconductors and Chips

 

Donald Trump

  • Plans to increase domestic production, imposing tariffs on foreign-made semiconductors
  • Suggests incentives for American companies to manufacture semiconductors in the US
  • Proposes loosening regulations to accelerate domestic chip facility construction
  • Supports defence-related partnerships to secure the US chip supply
  • Envisions stringent controls on semiconductor exports to nations deemed threats

Kamala Harris

  • Strongly supports the CHIPS Act for US semiconductor production resilience
  • Backs federal funding for semiconductor R&D and advanced manufacturing
  • Aims to strengthen alliances to secure a collaborative semiconductor supply chain
  • Seeks to minimise environmental impact with sustainable chip production
  • Plans to increase STEM programs with a focus on semiconductor technology

7. Military Technology

 

Donald Trump

  • Prioritises AI and automation within military systems to enhance the US defence capabilities
  • Supports increased defence funding for R&D in areas like drone technology and autonomous vehicles
  • Encourages private-sector partnerships for faster development of military-grade technology
  • Seeks minimal regulatory oversight for defence contractors to speed up innovation
  • Advocates for domestic production of military tech components to avoid foreign dependency

Kamala Harris

  • Focused on ethical standards in defence applications, aiming to balance military advancements with AI regulations
  • Likely to continue Biden’s approach to cybersecurity, ensuring that military tech meets robust security standards
  • Supports collaborative defence tech R&D with allied nations for shared security initiatives
  • Emphasises transparency in AI-driven military tech to avoid misuse in international conflicts
  • Reportedly advocates for expanding the Defence Advanced Research Projects Agency (DARPA) funding for new military technologies

8. Telecommunications

 

Donald Trump

  • Plans to push rapid 5G development with minimal regulatory restrictions
  • Advocates for private-sector-led broadband and telecom infrastructure without federal intervention
  • Favours limited government involvement in net neutrality, leaving speed and pricing policies to ISPs
  • Supports SpaceX’s Starlink for expanding satellite internet in rural areas
  • Emphasises deregulation in data centre development to bolster telecommunications infrastructure

Kamala Harris

  • Supports federally funded 5G and broadband expansion, especially in rural and underserved communities
  • Likely to uphold net neutrality, viewing it as essential for digital equity across all income levels
  • Favours sustainable practices in telecom infrastructure, particularly in data centre operations
  • Backs international collaboration on 5G standards and satellite connectivity
  • Advocates for subsidies to make internet access affordable nationwide, closing the digital divide

9. Blockchain and Cryptocurrency

 

Donald Trump

  • Branding himself as the “pro-crypto candidate”, in a U-turn from his previous term, he favours tax incentives for blockchain and crypto investments
  • Supports minimal cryptocurrency regulation to position the US as a crypto hub
  • Likely to limit SEC regulations on crypto to encourage economic growth
  • Advocates free-market approaches in blockchain for supply chain transparency
  • Opposes strict federal oversight, favouring industry-led standards

Kamala Harris

  • Likely to support comprehensive regulation to secure crypto markets and protect consumers
  • Likely to promote a standardised regulatory framework for digital assets
  • Advocates international cooperation to create clear, unified crypto regulations
  • Emphasises the importance of consumer protection in blockchain technology
  • Supports blockchain integration in secure government systems

10. Emerging Technology

 

Donald Trump

  • Limited support for green tech, focusing more on energy independence through traditional resources
  • Favours deregulated smart cities and IoT initiatives, prioritising private sector-led infrastructure
  • Promotes quantum computing for national security, encouraging innovation with minimal intervention
  • Supports initiatives to drive innovation in manufacturing technology, emphasising American-made tech for industrial resilience
  • Emphasises AI-driven IoT applications for defence and security

Kamala Harris

  • Likely to introduce policies to encourage clean tech and sustainable practices across emerging industries
  • Advocates for green tech and sustainable solutions within smart cities and IoT infrastructure developments
  • Supports federal funding for quantum computing and emerging tech with commercial applications
  • Promotes ethical AI and blockchain use in healthcare to improve patient data security and efficiency
  • Supports the development of AI tools and applications for equitable access to educational resources and personalised learning

The post Trump vs Harris: Key tech policies in the US presidential election appeared first on TechInformed.

Tech industry suggests Chancellor back start-ups, and warns about tax rises ahead of first Budget https://techinformed.com/uk-budget-labour-rachel-reeves-budget-tech-startups-tax-cybersecurity/ Tue, 29 Oct 2024 19:58:10 +0000 https://techinformed.com/?p=27079 UK Chancellor Rachel Reeves is set to unveil her first budget, promising a ‘new era for economic growth’ after a turbulent first few months in… Continue reading Tech industry suggests Chancellor back start-ups, and warns about tax rises ahead of first Budget

The post Tech industry suggests Chancellor back start-ups, and warns about tax rises ahead of first Budget appeared first on TechInformed.

UK Chancellor Rachel Reeves is set to unveil her first budget, promising a ‘new era for economic growth’ after a turbulent first few months in charge.

The budget proposed by Rachel Reeves and the Labour government, which took office in July after a snap election, highlights a £22 billion shortfall.

The shortfall has caused concern among businesses about potential tax increases and investment opportunities.

Many campaigners have argued that Rachel Reeves’ budget should prioritise developing the UK’s startup culture, promote university spin-outs, and provide increased support for scaling — especially for tech companies.

However, industry figures also warned that mooted tax changes, including Capital Gains Tax, could stymie growth.

Start-ups and Universities

 

The UK remains a leader in technological innovation, ranking third behind only the US and China for global venture capital investment.

The UK tech industry is worth more than double that of Germany ($467bn) and three times that of France ($307bn).

But the start-up picture still faces significant challenges around investment and scaling, talent, and encouraging entrepreneurship.

At a recent policy event in Westminster, Chi Onwurah MP, Chair of the Science, Innovation, and Technology Committee, discussed Rachel Reeves’ budget during a keynote speech.

She urged her Labour colleagues to support the UK’s innovation landscape and tech sector by backing universities.

Onwurah, who was the Shadow Minister for Science and Tech until the election, said: “There needs to be incentives and examples that we can set to inspire more entrepreneurial cultures in our universities.

“Many people have mentioned that so many of our talented young people study STEM subjects, but then go into other areas of the economy or culture.

“We should ensure that we are celebrating the influencers and celebrities that tech can provide and the cultural reference points for being a founder or an entrepreneur, as well.”

The event served as a platform for early-stage investor and philanthropist Dr. Ewan Kirk to present a series of policy proposals.

He argued that these proposals would prioritise innovation and entrepreneurship in the Government’s growth agenda, should Rachel Reeves decide to include them in her first budget.

Kirk has outlined several key policy points, including the introduction of a 10-year visa for all STEM graduates from UK universities.

He proposes replicating the UK’s research “Golden Triangle” — comprising Oxford, Cambridge, and London — in other regions of the country.

Additionally, he suggests creating delivery boards made up of industry experts to facilitate digital transformation in key sectors across the UK.

Kirk said: “The Government has rightly nailed its flag to the mast of economic growth, but the country is currently very strapped for cash. So, we need to see the delivery of cheap policies that can have an outsized impact. That is exactly what this policy is.

“Universities are education vehicles for producing and developing talent. The UK has one of the best higher education systems in the world, and it produces some of the best academics, innovators, and entrepreneurs in the world.

“But we’re voluntarily brain-draining ourselves by forcing this talent out once we’ve created it — through a combination of visa restrictions, expensive applications, and a wider hostile and difficult environment that we’ve created for foreign STEM students.

“Growth and productivity have flatlined for too long, and innovation and entrepreneurship are the levers we need to pull to do something about this woeful problem.

“This is precisely why we need to maintain and unleash the STEM talent emerging from our universities. This is a cheap and sensible policy — all it needs is the will to push it through.”

Read more: Labour’s next steps: HealthTech, GreenTech, and Startup industry leaders weigh in

Tax

 

During her campaign, Reeves assured that there would be no additional tax increases for “working people.” However, there is growing speculation that increases could affect areas like Capital Gains Tax (CGT) and Business Asset Disposal Relief (BADR) if included in Reeves’ budget.

Tech leaders told TechInformed that, if included in the budget, such rises could stifle innovation, especially in start-ups.

Tom Leathes, CEO and co-founder at Motorway warned: “Entrepreneurs thrive on incentives that reward risk. But I have a real concern that the looming Autumn budget could shift the balance, impacting growth and innovation in the UK’s technology ecosystem.

“Speculated plans to increase CGT rates or lower BADR will make scaling, selling, or reinvesting in new ventures significantly less attractive for many founders. Any barriers to founders starting and growing companies are going to hurt the UK in the long term.

“It’s also going to get harder for innovative tech companies to attract the talent they need to scale. As a tax on businesses (and therefore employees) increases, costs to employ the best talent go up.

“Incentivising employees with tax-efficient employee equity schemes is a powerful way to counter this, but the UK’s current startup equity plans are not good enough.

“We need the government to deliver policies that not only incentivise founders to get started but also help them incentivise the best talent to join them.

“Unless the government prioritises these points, we risk talent looking elsewhere to set up shop and scale. We should be boosting UK entrepreneurial spirit, not hindering it.”

CGT is charged on the sale of an asset rather than on general income, with rates varying according to the seller’s income and any “carried interest”. The tax applies to the gain in value, not the total amount received from the sale.
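The distinction can be sketched in a few lines of Python. The flat 20% rate here is purely illustrative, not a statement of actual UK rates, which vary by income band, asset type, and reliefs such as BADR:

```python
def capital_gain(sale_price: float, acquisition_cost: float) -> float:
    """The taxable base is the increase in value, not the sale proceeds."""
    return max(sale_price - acquisition_cost, 0.0)


def cgt_due(sale_price: float, acquisition_cost: float, rate: float = 0.20) -> float:
    """Charge CGT on the gain at a flat illustrative rate (real UK rates
    depend on the seller's income band and any reliefs)."""
    return capital_gain(sale_price, acquisition_cost) * rate


# A founder sells shares for £500,000 that originally cost £100,000:
# tax is charged on the £400,000 gain, not the £500,000 received.
print(cgt_due(500_000, 100_000))  # 80000.0
```

This is why a higher rate bears directly on founders and early investors: the larger the gain from a successful exit, the larger the share taken.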


Phil Kwok, CEO and co-founder of Easy A, cautioned that an increase in CGT could negatively affect scaling businesses and startups in two significant ways.

First, it would impact founders and early investors holding stock options, as they would be subject to the tax increase when selling their shares. This could penalise them for large sales or scaling their operations.

Additionally, he suggested that the rise in CGT could discourage investors, leading them to consider opportunities outside the UK tech scene in other markets.

“If the government sees tech as the bedrock of economic productivity and innovation, the approach seems clear — fostering an environment which encourages and nurtures these innovators so they can explore the full potential of all Web3 has to offer,” he added.

“Another area is support for young companies responsible for leading new tech innovations. Available capital for early-stage startups is low in the UK, with investors adopting a risk-averse approach.

“This is in contrast to the US, where early-stage startups can tap into investment and scale at pace. There are countless examples of renowned US startups whose rapid rise and success are down to the funding they secured early on.

“To be globally competitive, we need initiatives to support these young companies. We should take advantage of London’s reputation as a tech hub, which it has successfully built over the last two decades.”

Cybersecurity

 

As Rachel Reeves’ budget approaches, one key area for the Chancellor to consider is cybersecurity and making sure companies are positioned to invest in cyber-hygiene.

Reeves should adopt a “long-term view” in allocating resources to improve national cyber-defences, according to Barry O’Connell, GM of EMEA services at Trustwave.

“Plans to incentivise UK technology companies, enhanced business tax relief for cybersecurity investments, and injections of public funding into critical cybersecurity infrastructures and awareness programmes would be welcome moves to help boost the UK’s fortifications against evolving threats,” he added.

“Getting the basics right is the first critical priority, and implementing proper cyber-hygiene in an increasingly digital world requires Government intervention.

“This would be an important step to ensure that we are prioritising investments that protect the NHS and other public services, and the thousands of people consequently impacted, from devastating cyber harm.

“Overall, the Autumn Budget presents an opportunity for the government to reinforce its commitment to a secure and resilient digital infrastructure, which is essential for protecting sensitive data and maintaining public trust in digital services.”

Read more: Labour’s next steps: Cyber security, AI, & Open-Source industry leaders weigh in

EU strikes a blow against Apple and Google in landmark rulings https://techinformed.com/eu-rulings-apple-google-antitrust-taxes/ Wed, 11 Sep 2024 17:42:53 +0000 https://techinformed.com/?p=25733 In dual landmark rulings, the Court of Justice of the European Union (CJEU) found against both Apple and Google in cases regarding corporate tax avoidance… Continue reading EU strikes a blow against Apple and Google in landmark rulings

The post EU strikes a blow against Apple and Google in landmark rulings appeared first on TechInformed.

In dual landmark rulings, the Court of Justice of the European Union (CJEU) found against Apple and Google in cases concerning corporate tax avoidance and abuse of market dominance respectively, ordering payments totalling more than €15 billion.

In the ongoing battle within the EU to regulate multinational corporations, led by European Union antitrust chief Margrethe Vestager, Apple has been ordered to pay Ireland €13bn ($14.4bn) in back taxes, while Google has been fined €2.4bn ($2.7bn) for antitrust violations.

Vestager, who has made a name for herself going after Big Tech’s tax arrangements within the EU, said in a post on X, “Today is a huge win for European citizens and tax justice.”

One bad Apple

 

The case against Apple goes back to 2016 when the European Commission accused the company of receiving illegal tax benefits from Ireland.

According to the Commission, Apple’s subsidiaries in Ireland paid a much lower tax rate than other companies — as low as 0.005% in 2014 — a practice that violated EU state aid rules.

The Irish government, however, sided with Apple, arguing that the arrangement was lawful, stating that its low corporate tax rate is an essential tool in attracting foreign investment.

In 2020, the General Court of the CJEU issued a judgement annulling the Commission’s case, but the Commission appealed the judgement, and the Court has now ruled its 2016 decision stands.

In an official statement after the latest judgement, the Irish Department of Finance said: “The Irish position has always been that Ireland does not give preferential tax treatment to any companies or taxpayers.”

Apple vehemently denied the European Commission’s accusations, insisting it complied with both US and Irish tax laws.

“This case has never been about how much tax we pay, but which government we are required to pay it to,” an Apple spokesperson said. “We always pay all the taxes we owe wherever we operate, and there has never been a special deal.”

The iPhone 16 manufacturer maintained that its income was already subject to taxation in the US and that the Commission was trying to rewrite the rules retroactively.

Despite this, the Court ruled in favour of the Commission, and Apple must now repay the taxes.

In Google, we antitrust

 

The case against Google dates back to 2017 when the European Commission fined the company for abusing its online shopping comparison market dominance.

According to the Commission, Google gave preferential treatment to its own comparison-shopping service, disadvantaging smaller rivals.

The fine was the EU’s largest antitrust penalty ever issued at the time, totalling €2.4bn ($2.7bn) — until 2018, when the EU fined Google €4.3bn ($4.75bn) for abusing the dominant position of its Android mobile operating system to promote Google’s search engine.

Google has consistently contested the EU’s decision, arguing that its practices improved the quality of its services for consumers.

The company adjusted its shopping service in 2017 to comply with the EU’s ruling but continued to appeal the fine.

In a statement, Google said of its adjustments: “Our approach has worked successfully for more than seven years, generating billions of clicks for more than 800 comparison shopping services.”

Despite these efforts, in its latest ruling the Court upheld the Commission’s position that Google abused its market position, finding that the Commission was right to deem Google’s conduct “discriminatory” and that the company’s appeal “must be dismissed in its entirety.”

Google faces a similar case in the UK, where a lawsuit before a London court argues that the company should pay £13.6bn over claims it wields too much influence on the online advertising market.

Who EU gonna call?

 

The cases were closely watched across the EU as a significant moment for Big Tech’s European tax affairs, especially as the Commission’s investigations into tax arrangements between member states and multinationals have faced setbacks.

Just last year, Amazon successfully defended its tax arrangements in Luxembourg in a court battle, and the Commission similarly lost a case involving the Netherlands’ tax treatment of Starbucks, though it chose not to appeal.

AI-generated disinformation poses threat to UK general election integrity, CETaS report finds https://techinformed.com/ai-generated-disinformation-threats-uk-general-election-2024/ Thu, 30 May 2024 08:16:22 +0000 https://techinformed.com/?p=22411 A report published by the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) warns that AI-generated disinformation could be used to undermine democracy… Continue reading AI-generated disinformation poses threat to UK general election integrity, CETaS report finds

The post AI-generated disinformation poses threat to UK general election integrity, CETaS report finds appeared first on TechInformed.

A report published by the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) warns that AI-generated disinformation could be used to undermine democracy ahead of the upcoming UK general election — and beyond.

The report, titled “AI-Enabled Influence Operations: The Threat to the UK General Election”, finds that while AI’s current impact on specific election results is limited, it poses broader risks to the democratic system.

These include a “degraded and polarised information space” and online harassment through deepfakes. AI could enhance these kinds of threats across the various stages of the UK general election cycle.

The report found that only 19 of 112 national elections since January 2023 showed AI interference. It also found no clear evidence that election results significantly differed from polling data.

However, the confusion created by AI-generated content has damaged trust in online sources, according to the report. Deepfakes have incited online hate against political figures, and politicians could exploit AI disinformation for electoral gain.

Earlier this year, London Mayor Sadiq Khan called for a crackdown on disinformation after a deepfake audio of his voice making inflammatory remarks before the UK’s Remembrance Weekend commemorations was leaked.

Though examples of AI misuse are scarce, they are often amplified through mainstream media, inflating public anxieties about AI’s threat to electoral processes.

The report is the first of two CETaS publications on AI and election security — the second will be published in September — and identifies three categories of election security threats.

  • Campaign threats, which aim to manipulate voter behaviour or attitudes toward candidates or political issues
  • Information threats, which seek to undermine the quality of the information environment, confuse voters, and damage the integrity of electoral outcomes
  • Infrastructure threats, which target the systems and individuals responsible for securing election processes, using tactics such as ‘hack and leak‘ operations and AI-generated phishing emails against election officials

The report stresses the urgency of addressing ambiguous electoral laws on AI use, which both domestic and foreign actors could exploit. For example, political parties might misuse AI to fabricate campaign endorsements, undermining the election process.

However, with the UK general election set for 4th July, there is limited time to enhance election security protections.

According to the report, “ambiguous electoral laws on AI use during elections are currently resulting in misuse.”

The Labour Party has expressed concerns about social media platform X’s refusal to remove deepfake audio clips of party leader Sir Keir Starmer from October 2023. Some of these clips have received 1.5 million views.

 

Labour Party leader Keir Starmer giving a speech to the Labour Party Conference in Brighton, September 2021

 

As such, CETaS calls for immediate actions, including setting more explicit expectations on AI use for political parties and media organisations and analysing data from recent UK local elections to inform contingency planning.

Further suggestions include issuing ‘fair AI use’ guidelines and voluntary agreements for political parties, supporting media with AI threat reporting tools, and launching public AI awareness campaigns.

CETaS has developed a timeline mapping potential AI threats to the UK general election and corresponding countermeasures.

The timeline shows when threats will likely emerge and what outcomes they aim to achieve from pre- to post-election. It’s based on evidence from recent elections and academic literature and estimates the time windows for interventions to mitigate these threats.

The report underscores a pressing need for a coordinated, “whole-of-government” approach to safeguard the integrity of the upcoming UK general election.

With the election date rapidly approaching, the window for implementing robust security measures is narrowing. CETaS concludes that immediate action is essential to mitigate AI-generated threats and ensure public trust in the democratic process.

Overregulating AI will lead to start-ups “dying on the beach” https://techinformed.com/global-ai-regulation-eu-ai-act-insights-sim-conference-porto-2024/ Thu, 23 May 2024 15:10:38 +0000 https://techinformed.com/?p=21549 Governments worldwide are eyeing the opportunity offered by AI technologies, but concerns linger around regulation. The US, UK, China, and the European Union are all… Continue reading Overregulating AI will lead to start-ups “dying on the beach”

The post Overregulating AI will lead to start-ups “dying on the beach” appeared first on TechInformed.

Governments worldwide are eyeing the opportunity offered by AI technologies, but concerns linger around regulation. The US, UK, China, and the European Union are all looking to position themselves at the forefront of AI, as the EU AI Act is finally green-lit.

Earlier this month, at the inaugural SIM Conference in Porto, Portugal, a panel of experts — moderated by European Parliament cabinet member Catarina Peyroteo Saltier — outlined the evolving regulatory landscape in front of a dimly lit room full of tech-oriented minds.

With the EU AI Act set to shape the future of enterprise technology on the continent, here are TI’s takeaways from the “Regulatory Impact on AI: Doing No Significant Harm” panel.

A divided vision of AI’s future

 

As is the case with many topics, the diverse political structure within the EU has led to ideological divides within the European Parliament, according to Kai Zenner, head of office and digital policy adviser for Member of European Parliament (MEP) Axel Voss.

“50% have indicated already that they are rather afraid of this new technology, all of its new possibilities and so on — making references to the social benefits scandal in the Netherlands,” he explained.

The scandal in question refers to the Dutch tax authority’s use of an algorithm to detect benefits fraud in 2013. It led to thousands of families, particularly those with lower incomes or from ethnic minorities, being unjustly penalised, resulting in poverty, suicides, and over a thousand children being taken into foster care.

“On the other side, 50% are really trying to foster AI development and saying we can use AI to make the world a better place and fight against climate change,” he said.

This divide reflects broader global debates on the balance between harnessing AI’s potential for good and mitigating its risks.

 

Panel at SIM Conference 2024 in Porto. Pictured left to right: Catarina Peyroteo Saltier, Kai Zenner, Manuel Caldeira Cabral

 

Zenner, who is also part of a ‘network of experts’ supporting the UN Secretary-General’s ‘High-Level Advisory Body on AI’, pointed out that the dynamic nature of AI creates further challenges for regulators: “When the commission was coming out with their original proposal for the AI Act, it was already outdated.”

“The commission was not really thinking [about ChatGPT and foundation models] when they prepared the AI Act in 2019/20,” he added. “In the European Union, there was a big push to finish the Act in time for the elections coming up. I think it would have been better if we took a little bit more time.”

This comes after the European Parliament gave the AI Act the final green light this week, making it the world’s first major law regulating AI.

National quests for innovation

 

European countries such as France, Spain, and Portugal have begun creating their own national AI strategies. However, according to Saltier, the EU lags behind in AI innovation compared with other regions.

When asked how national governments can address the EU’s shortcomings, Manuel Caldeira Cabral, former Portuguese minister of economy and current deputy of the Portuguese Assembly of the Republic, clarified which side of the fence he sits on.

He called on national governments to prioritise AI’s opportunities, especially those related to enhancing public services and fostering competitive markets. “As a professor, I can’t say to my students, ‘You can’t use AI. You have to work like it’s the Middle Ages, writing it all out,’” he said.

“National strategies shouldn’t be about regulating to avoid the dangers of AI down the road; it has to be about the opportunities and how to use them to grow faster, implement within new areas, and create better services for the people,” he added.

He suggested that the EU take a positive outlook on AI, advocating for regulations that foster growth and innovation rather than stifle it with overbearing restrictions.

“The consensus about having the best possible legislation, better than the US or China, has led us to a situation where most of the data-intensive start-ups grow faster in the US or China than they manage to in the EU.”

He warned that if this continues, EU firms won’t be able to keep up and will be bought up by firms in the US or China. He added that this could leave EU data vulnerable to foreign businesses, and we would have to trust that they won’t use it nefariously.

Regulatory Sandboxes

 

Zenner discussed how regulatory sandboxes could bridge the gap between regulation and innovation, explaining how they could “play a major role in enabling SMEs and start-ups to get compliant by entering a very close dialogue with the regulators and enforcers. I think that will help and give them a competitive edge.”

 

What is a regulatory sandbox?

According to the European Parliament, while there is no agreed definition, regulatory sandboxes generally refer to tools that allow businesses to experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. This setup is intended to help companies innovate faster by reducing the usual regulatory hurdles while ensuring that consumer protection and system integrity are maintained.

Over recent years, the sandbox approach has gained traction across the EU as a means of helping regulators address emerging technologies such as AI and blockchain. Whilst predominantly used in the fintech sector, sandboxes have also emerged in other sectors like transport, energy, telecoms, and health to test innovations like autonomous cars, smart meters, 5G deployment, and predictive health technologies.

 

Caldeira Cabral added: “In financial services — which is quite a sensitive area — in Portugal, we have worked with firms to help them comply with the regulations instead of waiting for them to do the things they know they should. We help them make things better in a way that produces better results for the community without stopping them or dragging them back. Dragging them back leads to them dying on the beach.”

His sentiment clearly resonated, eliciting applause from a few audience members.

International cooperation and the start-up ecosystem

 

Luther Lowe, head of public policy at start-up accelerator Y Combinator, joined the panel later.

He brought his Silicon Valley perspective to the discussion, highlighting the global nature of start-ups and the importance of international cooperation in AI development.

“Every year, we fund about 500 companies. About half of those are AI businesses. When you’re a founder, and you’re looking to identify where to put your flag, you want someplace that’s not going to require you to undertake something too burdensome,” he said.

Lowe commented on recent legislative developments in California, which echo elements of the EU AI Act, pointing to a growing consensus on the need for ethical and safe AI development practices.

 

Panel at SIM Conference 2024 in Porto. Pictured left to right: Manuel Caldeira Cabral, Luther Lowe

 

However, he also cautioned against regulations that could stifle small-scale innovation.

“I think it has given some pause to some of the VCs and developers. It’s still very early, but I think we want to ensure we’re protecting open-source development.” He continued, “For example, if I’m tinkering on a small company and exploring how to build something new with generative AI, I don’t want to have to register with the government for some marginal project.”

Caldeira Cabral backed Lowe’s position, arguing that a single set of centralised EU rules was preferable to each European country creating its own bespoke regulations.

“Having rules is a good thing for firms because they know what they can and can’t do — but not overregulating, or this idea of having licences for each of the 27 countries in the EU. We really don’t want to say to start-ups, ‘If you don’t want to adhere to 27 different licenses from different governments, you’d better go somewhere else.’”

When asked what he was most excited about in terms of regulation from Europe, Lowe mentioned the Digital Markets Act (DMA): “If you think about the ability of a law to curb the self-preference of the gatekeepers and introduce a lot more oxygen into the markets, that’s going to unlock a lot of opportunity,” he said.

 

What is the Digital Markets Act?

The Digital Markets Act Regulation 2022 is an EU regulation that aims to create a fairer, more competitive digital economy. It came into effect on 1 November 2022 and became mostly applicable on 2 May 2023.

The DMA seeks to promote increased competition in European digital markets by preventing large companies from abusing their market power and by enabling new players to enter the market. This regulation specifically targets the largest digital platforms operating in the European Union, commonly referred to as “gatekeepers” due to their dominant market position in certain digital sectors and their fulfilment of specific criteria related to user numbers, turnover, or capitalisation.

In September 2023, the EU identified twenty-two services across six companies (deemed “gatekeepers”) — Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft — as “core platform services.” These companies were given until 6 March 2024 to comply with all of the Act’s provisions.

 

Caldeira Cabral spoke of the difficulties of having multiple sets of regulations for different regions: “I think we have to be realistic about what we can impose onto our firms, what we can impose onto the world, and how we negotiate with the world.

“I don’t know if the United States wants to negotiate with the European Union. Today? Yes, if we are reasonable. But by the end of the year, I don’t know what kind of America they’re going to have — and China may negotiate everything but then do whatever they want anyway.”

“This doesn’t mean that we should have no rules, but we should be careful about the side effects of having too many rules.”


The post Overregulating AI will lead to start-ups “dying on the beach” appeared first on TechInformed.

Global AI regulation efforts prompt US-China talks https://techinformed.com/global-ai-regulation-efforts-prompt-us-china-talks/ Tue, 14 May 2024 10:11:07 +0000

The US and China are to convene in Geneva today to deliberate on the risks associated with artificial intelligence and address its multifaceted challenges — particularly in security and ethical governance.

The meeting, between US Secretary of State Antony Blinken and China’s Foreign Minister Wang Yi, seeks to mitigate misunderstandings and foster a constructive exchange on AI’s implications for global security.

The US has been vocal about its concerns over China’s rapid AI advancements, emphasising the need for direct communication to safeguard its interests and those of its allies.

Despite the competitive undercurrents, both nations appear to recognise the potential benefits of establishing universal AI norms.

Michal Szymczak, head of AI strategy at software consultancy Zartis, said the talks between the two opposing nations were significant.

“While the White House has made it clear it’s not willing to budge on AI policies, any chance to introduce parity on AI regulation between China and the West should be explored and is most welcome.”

 


 

According to Szymczak, the quality of AI products and considerations like data privacy, intellectual property, and fairness will be crucial in determining consumer and enterprise preferences.

“For China to successfully export its AI technology to Western markets, it must ensure its products adhere to stringent local regulations to circumvent potential sanctions — similar to the challenges faced by companies like TikTok and Huawei.”

“These factors will likely influence consumer and enterprise decisions, with preferences leaning towards services that offer comprehensive, long-term support in these areas,” he said.

The Geneva talks represent a critical step in navigating the complex landscape of AI governance. However, Szymczak predicts a lack of trust will be a cause for concern.

“It is likely that nations will safeguard their strategic sectors, such as healthcare and energy, from foreign AI service providers until the technology matures and reliable safeguards are established,” he said.

He concluded on the issue of privacy: “Significant concerns remain regarding the control users have over their data and the scope of data collection, which could be pivotal in determining the success of AI solutions as consumers and governments become increasingly aware of and concerned about these issues.”


The post Global AI regulation efforts prompt US-China talks appeared first on TechInformed.
