Is the EU Leading the Charge or Losing the Race in Regulating AI?

As I sit down to reflect on the European Union’s emerging AI regulatory framework, I can’t help but feel a mix of admiration and unease. The EU is charting a bold course, aiming to classify AI tools based on their potential risks and impose stricter rules on high-risk systems like self-driving cars and medical technologies, while giving more leeway to lower-risk applications like internal chatbots.

As someone who has spent years covering the intersection of technology and policy, I’ve seen the transformative power of innovation and the chaos that can ensue when it’s left unchecked. The EU’s approach feels like a necessary step toward ensuring AI remains trustworthy and aligned with human values, but I worry it might come at the cost of stifling the very creativity it seeks to protect. This isn’t just a European issue—it’s a global one, and the world is watching closely.

The EU’s AI Act, which took effect in August 2024, is a groundbreaking piece of legislation, the first of its kind to tackle AI governance on such a comprehensive scale. The European Commission has divided AI systems into four risk categories: unacceptable, high, limited, and minimal. High-risk systems, like those used in healthcare or law enforcement, face rigorous requirements, including mandatory safety checks and detailed documentation. For instance, AI tools in medical devices must meet strict standards to ensure they don’t endanger patients, a move that reflects the EU’s deep commitment to safeguarding fundamental rights, as outlined in the official documentation of the AI Act. On the other hand, lower-risk systems, such as chatbots used within companies, are subject to lighter regulations, allowing businesses to innovate without being bogged down by red tape. It’s a thoughtful, risk-based approach designed to strike a balance between fostering innovation and protecting citizens.
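To make the four-tier structure concrete, here is a minimal sketch in Python of how an organization might map its own AI use cases onto the Act’s categories and the broad obligations described above. The specific category assignments and obligation strings are illustrative assumptions for this example, not text taken from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict checks, documentation, oversight
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping of use cases to tiers, following the examples in the article.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_device_ai": RiskTier.HIGH,
    "law_enforcement_ai": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "internal_chatbot": RiskTier.MINIMAL,
}

# Simplified obligations per tier -- a planning aid, not legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["safety checks", "technical documentation", "human oversight"],
    RiskTier.LIMITED: ["disclose AI nature to users"],
    RiskTier.MINIMAL: ["no specific obligations"],
}

def obligations_for(use_case: str) -> list[str]:
    """Return the illustrative obligation list for a known use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # unknown cases treated conservatively
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for("medical_device_ai"))
    # ['safety checks', 'technical documentation', 'human oversight']
```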

I admire the EU’s ambition here. Growing up in a world where technology often seemed to outrun regulation, I’ve seen the consequences of letting innovation run wild—data breaches, biased algorithms, and the erosion of privacy. The EU’s General Data Protection Regulation (GDPR), implemented back in 2018, set a global standard for data privacy, inspiring similar laws in places like Brazil and California. Over 130 countries have adopted data protection laws influenced by the GDPR, proving that the EU has the power to shape global norms. The AI Act could follow in its footsteps, becoming the go-to model for AI regulation worldwide. For companies operating in or targeting the European market, compliance isn’t just a legal checkbox—it’s a strategic necessity. Getting ahead of these rules could save businesses from costly last-minute scrambles and bolster their reputation as ethical innovators.

But there’s a catch, and it’s a big one. Critics worry that the EU’s regulatory zeal could backfire, particularly for smaller companies and startups. The European Commission estimates that compliance costs for high-risk AI systems could amount to €400,000 per system, depending on the complexity and scale. For small and medium-sized enterprises (SMEs), which make up 99% of all businesses in the EU and employ nearly 100 million people, these costs could be dealbreakers. I’ve spoken to entrepreneurs who fear they’ll be priced out of the European market or forced to abandon their AI projects altogether. If regulations push these smaller players away, Europe risks losing its competitive edge in a global AI race that’s heating up fast.

And then there’s the broader global context. While the EU is busy crafting its regulatory masterpiece, other major players like the United States and China are taking very different paths. The U.S., under President Donald Trump, has embraced a more hands-off approach, relying on voluntary guidelines and industry self-regulation. Meanwhile, China is pouring resources into AI development, with companies like DeepSeek emerging as global leaders. Analysts estimate that AI technology could add $600 billion annually to China’s economy, fueled by government support and a regulatory environment that’s far less restrictive than the EU’s. The third Artificial Intelligence Action Summit in Paris, held in February, highlighted these stark contrasts, with world leaders and tech executives grappling with how to regulate AI without losing ground to less regulated markets. China’s DeepSeek app, for example, which can self-train on coding and math problems, has only intensified these concerns, raising questions about whether the EU’s approach might leave it playing catch-up.

The EU’s AI Act also comes at a time when the AI landscape is evolving rapidly, with trends like AI-driven search snippets and workplace automation reshaping industries. Take Google’s AI Overviews, for example. A 2024 analysis by Seer found that these snippets, which provide answers directly on the search page, are reducing click-through rates for many businesses. While this is great for users who get quick answers, it’s a headache for companies that rely on organic traffic. On the workplace front, McKinsey’s 2024 report, “Superagency in the Workplace,” argues that AI can boost productivity and creativity but only if companies invest in training employees to collaborate with these tools. The report found that organizations that prioritize people-centric AI strategies—offering practical training, clear communication, and ethical guidelines—saw productivity gains. These insights suggest that regulation alone isn’t enough; success depends on how well organizations and societies adapt to AI’s potential.

Yet, for all the challenges, there’s a compelling case to be made for the EU’s approach. Proponents argue that well-crafted regulations can build trust and encourage responsible development. The AI Act’s focus on transparency, such as requiring developers to disclose details about their training data, resonates with growing public demand for accountability. 68% of Europeans want government restrictions on AI, citing concerns about privacy, bias, and job displacement. By addressing these issues head-on, the EU could position itself as a global leader in ethical AI, attracting businesses and consumers who value trust and safety. And let’s not forget the EU’s track record with the GDPR, which showed that robust regulation can coexist with innovation if it’s done right—thoughtfully, collaboratively, and with a clear eye on the bigger picture, as evidenced by its widespread global influence.

So, where does that leave us? As I see it, the EU’s AI regulatory framework is a bold and necessary experiment, one that reflects the bloc’s commitment to putting people first in an increasingly tech-driven world. But its success hinges on finding the right balance—encouraging innovation without sacrificing accountability and protecting rights without stifling growth. For businesses, the message is clear: don’t wait to adapt. Staying informed and preparing early could make all the difference, both in terms of compliance and reputation. For the EU, the challenge is even greater: to lead with vision, flexibility, and a willingness to learn from the global AI race. As a journalist, I’m cautiously optimistic, but I’ll be watching closely to see whether this framework becomes the global benchmark it aspires to be—or a cautionary tale of good intentions gone awry.

 

 

Source: https://intpolicydigest.org/is-the-eu-leading-the-charge-or-losing-the-race-in-regulating-ai/

Anndy Lian is an early blockchain adopter and experienced serial entrepreneur who is known for his work in the government sector. He is the best-selling author of “NFT: From Zero to Hero” and “Blockchain Revolution 2030”.

Currently, he is appointed as the Chief Digital Advisor at Mongolia Productivity Organization, championing national digitization. Prior to his current appointments, he was the Chairman of BigONE Exchange, a global top 30 ranked crypto spot exchange and was also the Advisory Board Member for Hyundai DAC, the blockchain arm of South Korea’s largest car manufacturer Hyundai Motor Group. Lian played a pivotal role as the Blockchain Advisor for Asian Productivity Organisation (APO), an intergovernmental organization committed to improving productivity in the Asia-Pacific region.

An avid supporter of incubating start-ups, Anndy has also been a private investor for the past eight years. With a growth investment mindset, Anndy strategically demonstrates this in the companies he chooses to be involved with. He believes that what he is doing through blockchain technology currently will revolutionise and redefine traditional businesses. He also believes that the blockchain industry has to be “redecentralised”.

How the EU is regulating crypto-assets with MiCAR and why you should care

The EU has recently adopted the Markets in Crypto-Assets Regulation (MiCAR). This groundbreaking legislation aims to provide a clear and consistent framework for regulating crypto-assets and related services in the EU. MiCAR will apply from the end of 2024, with some provisions applying from mid-2024.

MiCAR defines crypto-assets as “a digital representation of value or rights which may be transferred and stored electronically, using distributed ledger technology or similar technology.” This definition covers various types of crypto-assets, such as cryptocurrencies, tokens, and stablecoins; non-fungible tokens (NFTs) that are genuinely unique and non-fungible, however, largely fall outside MiCAR’s scope. It also excludes crypto-assets already regulated under existing EU financial services legislation, such as financial instruments, deposits, electronic money, or insurance products. I agree with this definition, as it is broad and neutral enough to capture the diversity and innovation of crypto-assets while also respecting the existing regulatory frameworks for other types of assets.

Furthermore, it classifies crypto-assets into three main categories: e-money tokens (EMTs), asset-referenced tokens (ARTs), and other tokens. EMTs are crypto-assets pegged to one official currency, such as Tether or USD Coin. ARTs are crypto-assets backed by a pool of assets, such as fiat currencies, commodities, or other crypto-assets. Other tokens are crypto-assets that have various purposes and characteristics, such as utility tokens, payment tokens, or governance tokens.
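As a rough illustration of this three-way split, the sketch below models the categories as a small Python type and assigns a token to one of them based on what it references. The field names and decision logic are simplifying assumptions made for this example; the regulation’s own tests are considerably more detailed.

```python
from dataclasses import dataclass
from enum import Enum

class MiCARCategory(Enum):
    EMT = "e-money token"            # pegged to a single official currency
    ART = "asset-referenced token"   # backed by a pool or basket of assets
    OTHER = "other token"            # utility, payment, governance tokens, etc.

@dataclass
class CryptoAsset:
    name: str
    referenced_currencies: int = 0    # number of official currencies referenced
    referenced_other_assets: int = 0  # commodities, crypto-assets, etc.

def classify(asset: CryptoAsset) -> MiCARCategory:
    """Assign a MiCAR category using the simplified criteria described in the article."""
    if asset.referenced_currencies == 1 and asset.referenced_other_assets == 0:
        return MiCARCategory.EMT
    if asset.referenced_currencies + asset.referenced_other_assets > 0:
        return MiCARCategory.ART
    return MiCARCategory.OTHER

print(classify(CryptoAsset("USD-pegged stablecoin", referenced_currencies=1)))  # EMT
print(classify(CryptoAsset("basket-backed token", referenced_currencies=2)))    # ART
print(classify(CryptoAsset("governance token")))                                # OTHER
```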

Building on this classification, MiCAR also introduces the concept of significant tokens for EMTs and ARTs, which are subject to additional requirements because of their potential impact on financial stability or monetary policy. The European Banking Authority (EBA) will identify and monitor significant tokens based on criteria such as the number of users, transaction values, interconnectedness with the financial system, and the innovation or complexity of the token. I think this classification is reasonable and valuable, as it reflects the different functions and risks of crypto-assets while allowing for some flexibility and adaptation. When I spoke to EU-based bankers who are considering ESG-related crypto funds, they argued that MiCAR should also take into account the environmental and social impact of crypto-assets, especially those that consume a lot of energy or resources or that may affect human rights or privacy. I did not comment on that, but I am well aware of their “crypto agenda”. I also think the regulators should actively involve other stakeholders, such as consumers, investors, and developers, in identifying and monitoring significant tokens, as they may have valuable insights and feedback.

MiCAR imposes different authorization and supervision requirements on crypto-asset issuers and crypto-asset service providers (CASPs), depending on the type and significance of the crypto-asset. Crypto-asset issuers offer crypto-assets to the public or seek their admission to trading on a trading platform for crypto-assets. CASPs provide or perform services or activities related to crypto-assets, such as custody, exchange, execution, advice, or portfolio management. Issuers of EMTs and ARTs must obtain authorization from the competent authority of their home member state before offering such tokens or admitting them to trading. They must also prepare and publish a white paper that discloses essential information about the crypto-asset project, such as the features, rights, and obligations of the crypto-asset, the risks and costs involved, the governance and technical arrangements, and the identity and contact details of the issuer. Do note that issuers of other crypto-assets (those that are neither EMTs nor ARTs) do not need authorization, but they must still comply with the white paper requirement and other general obligations.
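To show what the white paper obligation might look like operationally, here is a minimal sketch of a disclosure record with a completeness check, using field names drawn from the items listed above; the structure itself is an assumption for illustration, not a template prescribed by MiCAR.

```python
from dataclasses import dataclass, fields

@dataclass
class WhitePaper:
    """Illustrative disclosure record mirroring the items MiCAR asks issuers to publish."""
    asset_features: str           # features, rights, and obligations of the crypto-asset
    risks_and_costs: str          # risks and costs involved for holders
    governance_arrangements: str  # governance and technical arrangements
    issuer_identity: str          # identity and contact details of the issuer

def missing_sections(wp: WhitePaper) -> list[str]:
    """Return the names of any sections left empty, as a simple pre-publication check."""
    return [f.name for f in fields(wp) if not getattr(wp, f.name).strip()]

draft = WhitePaper(
    asset_features="Utility token granting platform access",
    risks_and_costs="",
    governance_arrangements="On-chain voting; audited smart contracts",
    issuer_identity="Example Issuer B.V., contact@example.eu",
)
print(missing_sections(draft))  # ['risks_and_costs']
```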

CASPs must obtain authorization from the competent authority of their home member state before providing or performing any crypto-asset services or activities. They must also comply with prudential requirements, conduct-of-business rules, safeguarding requirements, and anti-money laundering and counter-terrorism financing (AML/CTF) obligations. I support these requirements, as they aim to ensure the transparency, accountability, and responsibility of crypto-asset issuers and CASPs and protect the interests and rights of consumers, investors, and the public. On top of this, I think that MiCAR should also provide some incentives and benefits for crypto-asset issuers and CASPs that comply with these requirements, such as lower fees, faster processing, or broader access. I also think that MiCAR should promote cooperation and coordination among the competent authorities of different member states and other international regulators and organizations to avoid duplication, inconsistency, or conflict.

MiCAR also provides some transitional provisions and exemptions for crypto-asset issuers and CASPs already operating in the EU before MiCAR’s application date. For example, those authorized or registered under national regimes in one or more member states may continue to operate in those member states until mid-2025 without obtaining authorization under MiCAR. However, they must continue to comply with the relevant national rules and regulations, and they must apply for authorization under MiCAR by mid-2024 if they wish to operate in the EU after mid-2025.

The EU has also established a pilot regime for distributed ledger technology (DLT) market infrastructures, a new type of market participant that uses DLT to provide trading and settlement services for crypto-assets that qualify as financial instruments. The pilot regime aims to test the use of DLT in trading and post-trading of crypto-assets while ensuring a high level of investor protection and market integrity. It will apply for five years from the application date of MiCAR, with the possibility of an extension. In my opinion, these provisions are good, as they recognize the diversity and maturity of the existing crypto-asset market in the EU and can provide a smooth and gradual transition to the new regulatory framework. Regulators should also ensure fair and equal treatment of all crypto-asset issuers and CASPs, regardless of origin, size, or status, and avoid creating undue advantages or disadvantages for some over others. If they can encourage and support the participation and experimentation of different actors and stakeholders in the pilot regime, such as incumbents, newcomers, and innovators, and foster a collaborative and inclusive environment for the development and adoption of DLT, it will be a big plus.

MiCAR does not apply to crypto-assets issued or guaranteed by central banks, member states, third countries, or public international organizations. Nor does it apply to crypto-asset services or activities provided or performed by central banks or other public authorities in the performance of their public tasks or functions. These exemptions aim to preserve the monetary sovereignty and policy of the EU and its member states and to facilitate the development of central bank digital currencies (CBDCs) and other public initiatives in the crypto-asset space. I understand these exemptions, as they reflect the special and privileged status of central banks and public authorities and their role and responsibility in the monetary and financial system. However, I think MiCAR should also ensure a close and constructive dialogue and cooperation between the public and private sectors and foster a balanced and complementary relationship between traditional and innovative forms of money and finance. I also think that regulators should monitor and assess the impact and implications of CBDCs and other public initiatives on the crypto-asset market and address any potential issues or challenges that may arise.

I also want to highlight that there are also some implications for investment firms and the travel rule, which are relevant to the crypto-asset market. Investment firms are those who provide or perform investment services or activities on a professional basis, such as execution of orders, portfolio management, or investment advice. The travel rule is a requirement that obliges financial institutions to exchange certain information about the originator and the beneficiary of a funds transfer, such as their names, addresses, account numbers, and transaction amounts.

MiCAR allows investment firms that are authorized under the Markets in Financial Instruments Directive 2014/65/EU (MiFID II) to provide or perform crypto-asset services or activities in relation to crypto-assets that qualify as financial instruments without obtaining additional authorization under MiCAR. However, they must comply with the relevant MiFID II rules and regulations, as well as some specific requirements under MiCAR, such as the safeguarding and AML/CTF obligations. Investment firms that wish to provide or perform crypto-asset services or activities concerning crypto-assets that do not qualify as financial instruments must obtain authorization under MiCAR and comply with its rules and regulations.

The travel rule applies to crypto-asset transfers, that is, any transactions resulting in a change of ownership of one or more crypto-assets from one person to another. MiCAR requires CASPs involved in crypto-asset transfers to exchange certain information with other CASPs, such as the name and account number of the originator and the beneficiary, the amount and type of crypto-asset transferred, and the date and time of the transfer. CASPs must ensure that this information is accurate and complete and that it is transmitted securely and confidentially. They must also keep records of the information for at least five years. The travel rule must be implemented by mid-2024, the same date on which the Financial Action Task Force (FATF) standards on virtual assets and virtual asset service providers are to be applied.
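The information exchange described here can be pictured as a small structured payload passed between CASPs. The sketch below is an assumption-laden illustration of such a record with a completeness check; real implementations follow the technical standards accompanying the rules, which define the exact fields and formats.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TravelRuleRecord:
    """Illustrative originator/beneficiary record exchanged between CASPs."""
    originator_name: str
    originator_account: str
    beneficiary_name: str
    beneficiary_account: str
    asset_type: str          # the crypto-asset being transferred, e.g. an EMT
    amount: float
    transfer_time: datetime

    def is_complete(self) -> bool:
        """Check that no mandatory field is empty before transmitting the record."""
        return all(str(value).strip() for value in asdict(self).values())

record = TravelRuleRecord(
    originator_name="Alice Example",
    originator_account="wallet-or-account-id-1",
    beneficiary_name="Bob Example",
    beneficiary_account="wallet-or-account-id-2",
    asset_type="EMT",
    amount=250.0,
    transfer_time=datetime.now(timezone.utc),
)
print(record.is_complete())  # True -- records must also be retained for at least five years
```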

MiCAR aims to establish a level playing field and a single market for crypto-assets and related services within the EU. It does so by harmonizing and simplifying the current national regulatory frameworks, thereby reducing regulatory fragmentation and uncertainty. At the same time, it leaves a degree of regulatory flexibility and discretion at the national level, which opens the door to regulatory arbitrage and competition among EU member states in specific areas. Some of the leading EU jurisdictions for MiCAR compliance and regulatory arbitrage are France, Germany, and Malta. These jurisdictions have already adopted national regimes for crypto-assets and related services that are relatively mature, flexible, and clear. They also have supportive and innovation-friendly regulators, such as the AMF, BaFin, and the MFSA, which have issued guidance and recommendations on crypto-assets and related services, and they host robust and diversified crypto-asset ecosystems with several established and emerging players. These jurisdictions are likely to maintain and strengthen their leading positions in the crypto-asset market under MiCAR, as they enjoy a competitive edge and a first-mover advantage over other member states.

To sum up, MiCAR is a landmark legislation shaping the future of crypto-assets in the EU. It will introduce legal certainty, consumer protection, market integrity, and financial stability and foster innovation and competition by enabling cross-border activities and passporting rights for crypto-asset issuers and CASPs within the EU.

MiCAR is visionary and ambitious legislation that reflects the importance and potential of crypto-assets and related services and responds to the needs and expectations of the crypto-asset community and society at large. It is also complex and dynamic legislation that requires constant monitoring and evaluation and may face difficulties and uncertainties in its application and enforcement. I hope that MiCAR will be able to adapt and evolve with the changing and growing nature of crypto-assets and related services and that it will achieve its objectives and deliver its intended benefits.

I look forward to seeing the development and implementation of this framework, and I hope it will contribute to the growth and maturity of the crypto-asset industry in the EU and beyond.

 

Source: https://www.financialexpress.com/business/digital-transformation-how-the-eu-is-regulating-crypto-assets-with-micar-and-why-you-should-care-3434243/

 


What the EU Gets Right with its New AI Rules

The European Union’s latest effort to rein in artificial intelligence, the AI Act, marks a pivotal step towards regulating a technology that is as pervasive as it is potent. With its public unveiling on January 21, the Act lays out a framework that seeks to harness AI’s capabilities while safeguarding the fundamental tenets of trust, ethics, and human rights.

As we unpack the Act’s dimensions, we will weigh its merits against its potential impediments to the trajectory of AI, not just within the confines of Europe but as a precedent for the global stage. The discourse around this groundbreaking legislation is as much about its current form as it is about the dialogue it engenders concerning the future interplay of artificial intelligence with our societal mores and economic frameworks.

Does it strike the right balance?

The AI Act introduces a risk-based regulatory schema, categorizing AI systems into four tiers: unacceptable, high, limited, and minimal risk. The Act prohibits ‘unacceptable risk’ AI systems, such as manipulative social scoring and covert emotional manipulation, to protect individual rights. ‘High-risk’ AIs, pivotal in healthcare, education, and law enforcement, face rigorous requirements including human oversight. ‘Limited-risk’ AIs, like chatbots, must disclose their AI nature to users. Lastly, ‘minimal-risk’ AIs, like video games, face minimal regulatory constraints, promoting innovation while safeguarding against abuses.

The AI Act is crafted with the dual goals of fostering technological innovation and upholding fundamental rights. The Act’s targeted regulatory focus seeks to minimize undue burdens on AI practitioners by emphasizing the control of applications with the most potential for harm. However, it is not without its detractors. Critics point to its ostensibly broad and ambiguous language, which may leave too much open to interpretation, potentially leading to legal uncertainties.

The Act’s broad definition of AI as a technology-neutral concept, its reliance on subjective terminology like “significant” risk, and the discretionary power it affords to regulatory bodies are seen as potential stumbling blocks, raising concerns over possible inconsistencies and confusion for stakeholders within the EU’s digital marketplace.

A significant challenge the EU’s AI Act faces is ensuring consistent enforcement across all member states. To address this, the Act constructs an elaborate governance structure that includes the European Artificial Intelligence Board and national authorities, bolstered by bodies responsible for market surveillance. The Act stipulates robust penalties for non-compliance, including fines of up to 7% of global annual turnover. Beyond punitive measures, it emphasizes the role of self-regulation, expecting AI entities to undertake conformity assessments and maintain risk management protocols. The Act also recognizes the importance of global cooperation, considering the divergent AI regulatory landscapes outside the EU.
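As a back-of-the-envelope illustration of the penalty ceiling mentioned above, the short sketch below computes 7% of a firm’s global annual turnover; it deliberately ignores the Act’s other thresholds and the tiering of fines by infringement type.

```python
def max_turnover_based_fine(global_annual_turnover_eur: float, cap_rate: float = 0.07) -> float:
    """Upper bound of a turnover-based fine at the 7% ceiling cited in the article."""
    return global_annual_turnover_eur * cap_rate

# A company with EUR 2 billion in global annual turnover:
print(f"{max_turnover_based_fine(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```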

The efficacy of the Act will ultimately hinge on the collective engagement and adherence of all parties to its stipulated frameworks.

Some pros and cons of the AI Act

The AI Act directly addresses the burgeoning field of advanced technologies, focusing on generative AI, biometric identification, and the nascent realm of quantum computing. These technologies hold transformative potential across diverse sectors including healthcare, education, entertainment, security, and scientific research.

Yet, with great potential comes a spectrum of challenges, particularly concerning ethical issues like bias and discrimination, as well as concerns over privacy, security, and accountability. The Act confronts these challenges head-on by instituting rules and obligations tailored to specific AI categories. For instance, generative AI systems — which can create new, diverse outputs such as text, images, audio, or video from given inputs — must adhere to stringent transparency obligations. This is particularly pertinent as generative AIs like ChatGPT and DALL-E find broader applications in content creation, education, and other domains.

The Act acknowledges the potential for malicious use of generative AI, such as spreading disinformation, engaging in fraudulent activities, or launching cyberattacks. To counteract this, it mandates that any AI-generated or manipulated content must be identifiable as such, either through direct communication to the user or through built-in detectability. The goal is to ensure that users are not deceived by AI-generated content, maintaining a level of authenticity and trust in digital interactions.
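One simple way to picture the “identifiable as such” requirement is a machine-readable disclosure attached to every generated artifact. The sketch below is a hypothetical illustration of such a wrapper; the Act does not prescribe this format, and real systems would rely on standardized provenance or watermarking schemes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabeledContent:
    """Content bundled with a machine-readable AI-generation disclosure."""
    body: str
    ai_generated: bool
    generator: Optional[str] = None  # name of the model or tool, if disclosed

def render(content: LabeledContent) -> str:
    """Prepend a human-visible notice when the content is AI-generated."""
    if content.ai_generated:
        tool = content.generator or "an AI system"
        return f"[Generated by {tool}]\n{content.body}"
    return content.body

print(render(LabeledContent(body="A summary of today's news...", ai_generated=True, generator="ExampleGen")))
```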

Additionally, the Act requires AI systems that manipulate content to be designed in such a way that their outputs can be discerned as AI-generated by humans or other AI systems. This provision aims to preserve the integrity of information and preclude the erosion of factual standards in the digital age.

The AI Act is intentionally crafted to harmonize technological progress with the protection of foundational societal norms and values. The Act’s efficacy is predicated on the meticulous application of these regulations, keeping pace with the rapid development of AI technologies.

Turning to biometric identification systems, these tools are capable of recognizing individuals based on unique physical or behavioral traits such as facial features, fingerprints, voice, or even patterns of movement. While they offer enhancements in security, border management, and personalized access, they simultaneously raise substantial concerns for individual rights, including privacy and the presumption of innocence.

The Act specifically addresses the sensitive nature of biometric identification, incorporating stringent controls over its deployment. It notably restricts the use of real-time biometric identification systems in public areas for law enforcement, barring a few exceptions where the circumstances are critically compelling — such as locating a missing child, thwarting a terrorist threat, or tackling grave criminal activity.

In cases where biometric techniques are employed for law enforcement, the Act mandates prior approval from an independent authority, ensuring that any use is necessary, proportionate, and coupled with human review and protective measures. This regulatory stance underlines a commitment to uphold civil liberties even as we advance into an era of increasingly sophisticated digital surveillance tools.

Harnessed from the enigmatic realm of quantum physics, quantum computing emerges as a technological titan capable of calculations that dwarf the prowess of traditional computers. With the power to sift through vast data and unlock solutions to hitherto intractable problems, its potential spans the spectrum from cryptography to complex simulations, and from optimization to machine learning. Yet, this same capability ushers in novel risks: the crumbling of current cryptographic defenses, the birth of unforeseen security breaches, and the potential to tilt global power equilibria. The European Union’s AI Act, while not directly addressing quantum computing, encompasses AI systems powered by such quantum techniques within its regulatory embrace, mandating adherence to established rules based on the assessed risk and application context. Moreover, the Act presciently signals the need for persistent exploration and innovation in this sphere, advocating for the creation of encryption that can withstand the siege of quantum capabilities.

The Act’s influence on the vanguard of technology is paradoxical. It affords a measure of predictability and a compass for AI practitioners and end-users alike, weaving a safety net for the digital citizenry. Conversely, it may erect hurdles that temper the speed of AI progress and competitive edge, leaving a mist of ambiguity over the governance and stewardship of AI. The true measure of the Act’s imprint will reveal itself in the finesse of its enforcement, its interpretative flexibility, and its dance with the ever-evolving tempo of AI innovation.

Ethical considerations

The ethical tapestry of the AI Act is rich and intricate, advocating for an AI that is at once robust, ethical, and centered around human dignity, reflecting and magnifying the EU’s core values. It draws inspiration from the Ethics Guidelines for Trustworthy Artificial Intelligence, which delineate seven foundational requirements for the ethical deployment of AI, from ensuring human agency to nurturing environmental and societal flourishing. These principles are not merely aspirational; they are translated into tangible and binding mandates that shape the conduct of AI creators and users.

This ambitious ethical framework, however, does not come without its conundrums and concessions. It grapples with the dynamic interplay of competing interests and ideals: the equilibrium between AI’s boon and bane, the negotiation between stakeholder rights and obligations, the delicate dance between AI autonomy and human supervision, the reconciliation between market innovation and consumer protection, and the symphony of diverse AI cultures under a unifying regulatory baton. These quandaries do not lend themselves to straightforward resolutions; they demand nuanced and context-sensitive deliberations.

The ethical footprint of the Act will also depend on its reception within the AI community and the wider public sphere. Its legacy will be etched in the collective commitment to trust and responsibility across the AI ecosystem, involving developers, users, consumers, regulators, and policymakers. The vision is a Europe — and indeed, a world — where AI is synonymous with trustworthiness and accountability. This lofty goal transcends legal mandates, reaching into the realm of ethical conviction and societal engagement from every stakeholder.

In an era where artificial intelligence weaves through the fabric of society, the AI Act emerges as a pioneering and comprehensive legislative beacon, guiding AI towards a future that harmonizes technological prowess with human values.

The Act casts a wide net, touching on policy formulation, regulatory architecture, and the ethical lattice of AI applications across and beyond European borders. It stands as a testament to opportunity and foresight, yet it is not without its intricate tapestry of challenges and quandaries. The true measure of its influence lies not in its immediate enactment but in the organic adaptability and robust enforcement as the landscape of AI shifts and expands.

It’s crucial to articulate that this Act doesn’t represent the terminus of regulatory dialogue but inaugurates a protracted era of AI governance. It necessitates periodic refinement in lockstep with the march of innovation and the unveiling of new horizons and prospects. This legislative framework calls for a symphony of complementary endeavors: the investment in research, the enrichment of education, the deepening of public discourse, and the cultivation of global partnerships.

Embarking on this audacious path to an AI domain that is dependable, ethical, and human-centric is a collective venture. It demands a concerted commitment from all corners of the AI sphere — developers, users, policymakers, and citizens alike. It is an invitation to contribute to and bolster this trailblazing expedition into the domain of artificial intelligence — an odyssey that we all are integral to shaping.

 

 

Source: https://intpolicydigest.org/what-the-eu-gets-right-with-its-new-ai-rules/
