How the EU is regulating crypto-assets with MiCAR and why you should care

The EU has recently adopted the Markets in Crypto-Assets Regulation (MiCAR). This groundbreaking legislation aims to provide a clear and consistent framework for regulating crypto-assets and related services in the EU. MiCAR will apply from the end of 2024, with some provisions applying from mid-2024.

MiCAR defines crypto-assets as “a digital representation of value or rights which may be transferred and stored electronically, using distributed ledger technology or similar technology.” This definition covers various types of crypto-assets, such as cryptocurrencies, tokens, and stablecoins, although crypto-assets that are unique and not fungible with other crypto-assets, such as most non-fungible tokens (NFTs), are largely carved out of MiCAR’s scope. It also excludes crypto-assets already regulated under existing EU financial services legislation, such as financial instruments, deposits, electronic money, or insurance products. I agree with this definition, as it is broad and neutral enough to capture the diversity and innovation of crypto-assets while also respecting the existing regulatory frameworks for other types of assets.

Furthermore, MiCAR classifies crypto-assets into three main categories: e-money tokens (EMTs), asset-referenced tokens (ARTs), and other tokens. EMTs are crypto-assets pegged to a single official currency; Tether and USD Coin are well-known examples. ARTs are crypto-assets backed by a pool of assets, such as fiat currencies, commodities, or other crypto-assets. Other tokens are crypto-assets with various purposes and characteristics, such as utility tokens, payment tokens, or governance tokens.

MiCAR also introduces the concept of significant EMTs and ARTs, which are subject to additional requirements because of their potential impact on financial stability or monetary policy. The European Banking Authority (EBA) will identify and monitor significant tokens based on criteria such as the number of users, transaction values, interconnectedness with the financial system, and the innovation or complexity of the token. I think this classification is reasonable and valuable, as it reflects the different functions and risks of crypto-assets while allowing for some flexibility and adaptation. Personally, when I spoke to EU-based bankers who are considering ESG-related crypto funds, they mentioned that MiCAR should also consider the environmental and social impact of crypto-assets, especially those that consume a lot of energy or resources or those that may affect human rights or privacy. I did not comment on that, but I am well aware of their “crypto agenda”. I also think that regulators should actively involve other stakeholders, such as consumers, investors, and developers, in identifying and monitoring significant tokens, as they may have valuable insights and feedback.

MiCAR imposes different authorization and supervision requirements on crypto-asset issuers and crypto-asset service providers (CASPs), depending on the type and significance of the crypto-asset. Crypto-asset issuers offer crypto-assets to the public or seek their admission to trading on a trading platform for crypto-assets. CASPs provide or perform services or activities related to crypto-assets, such as custody, exchange, execution, advice, or portfolio management. Issuers of EMTs and ARTs must obtain authorization from the competent authority of their home member state before offering such tokens or admitting them to trading. They must also prepare and publish a white paper that discloses essential information about the crypto-asset project, such as the features, rights, and obligations of the crypto-asset, the risks and costs involved, the governance and technical arrangements, and the identity and contact details of the issuer. Issuers of other crypto-assets do not need authorization, but they must comply with the white paper requirement and other general obligations.

CASPs must obtain authorization from the competent authority of their home member state before providing or performing any crypto-asset services or activities. They must also comply with prudential requirements, conduct-of-business rules, safeguarding requirements, and anti-money laundering and counter-terrorism financing (AML/CTF) obligations. I support these requirements, as they aim to ensure the transparency, accountability, and responsibility of crypto-asset issuers and CASPs and to protect the interests and rights of consumers, investors, and the public. On top of this, I think MiCAR should also provide some incentives and benefits for crypto-asset issuers and CASPs that comply with these requirements, such as lower fees, faster processing, or broader access. I also think MiCAR should promote cooperation and coordination among the competent authorities of different member states and other international regulators and organizations to avoid duplication, inconsistency, or conflict.

MiCAR also provides some transitional provisions and exemptions for crypto-asset issuers and CASPs already operating in the EU before the application date of MiCAR. For example, those authorized or registered under national regimes in one or more member states may continue to operate in those member states until mid-2025 without obtaining authorization under MiCAR. However, they must continue to comply with the relevant national rules and regulations, and they must apply for authorization under MiCAR by mid-2024 if they wish to operate in the EU after mid-2025.

The EU has also established a pilot regime for distributed ledger technology (DLT) market infrastructures, a new type of market participant that uses DLT to provide trading and settlement services for crypto-assets that qualify as financial instruments. The pilot regime aims to test the use of DLT in the trading and post-trading of crypto-assets while ensuring a high level of investor protection and market integrity. It will apply for five years from the application date of MiCAR, with a possibility of extension. In my opinion, these provisions are good, as they recognize the diversity and maturity of the existing crypto-asset market in the EU and can provide a smooth and gradual transition to the new regulatory framework. They should also ensure fair and equal treatment of all crypto-asset issuers and CASPs, regardless of origin, size, or status, and avoid creating undue advantages or disadvantages for some over others. If the regulators can encourage and support the participation and experimentation of different actors and stakeholders in the pilot regime, such as incumbents, newcomers, and innovators, and foster a collaborative and inclusive environment for the development and adoption of DLT, that will be a big plus.

MiCAR does not apply to crypto-assets issued or guaranteed by central banks, member states, third countries, or public international organizations. Nor does it apply to crypto-asset services or activities provided or performed by central banks or other public authorities in the performance of their public tasks or functions. These exemptions aim to preserve the monetary sovereignty and policy of the EU and its member states and to facilitate the development of central bank digital currencies (CBDCs) and other public initiatives in the crypto-asset space. I understand these exemptions, as they reflect the special and privileged status of central banks and public authorities and their role and responsibility in the monetary and financial system. However, I think MiCAR should also ensure a close and constructive dialogue and cooperation between the public and private sectors and foster a balanced and complementary relationship between traditional and innovative forms of money and finance. MiCAR should also monitor and assess the impact and implications of CBDCs and other public initiatives on the crypto-asset market and address any potential issues or challenges that may arise.

I also want to highlight that there are some implications for investment firms and the travel rule, both of which are relevant to the crypto-asset market. Investment firms are entities that provide or perform investment services or activities on a professional basis, such as the execution of orders, portfolio management, or investment advice. The travel rule is a requirement that obliges financial institutions to exchange certain information about the originator and the beneficiary of a funds transfer, such as their names, addresses, account numbers, and transaction amounts.

MiCAR allows investment firms that are authorized under the Markets in Financial Instruments Directive 2014/65/EU (MiFID II) to provide or perform crypto-asset services or activities in relation to crypto-assets that qualify as financial instruments without obtaining additional authorization under MiCAR. However, they must comply with the relevant MiFID II rules and regulations, as well as some specific requirements under MiCAR, such as the safeguarding and AML/CTF obligations. Investment firms that wish to provide or perform crypto-asset services or activities concerning crypto-assets that do not qualify as financial instruments must obtain authorization under MiCAR and comply with its rules and regulations.

The travel rule applies to crypto-asset transfers, meaning any transactions that result in the change of ownership of one or more crypto-assets from one person to another. MiCAR requires CASPs involved in crypto-asset transfers to exchange certain information with other CASPs, such as the name and account number of the originator and the beneficiary, the amount and type of crypto-asset transferred, and the date and time of the transfer. CASPs must ensure that the information is accurate and complete and that it is transmitted securely and confidentially. They must also keep records of the information for at least five years. They must implement the travel rule by mid-2024, in line with the Financial Action Task Force (FATF) standards on virtual assets and virtual asset service providers.
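
To make the data-exchange requirement more concrete, the sketch below shows the kind of record a sending CASP might assemble, transmit, and retain; the `TravelRuleRecord` class and its field names are illustrative assumptions, not an official MiCAR or FATF schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TravelRuleRecord:
    # Illustrative fields only -- not an official MiCAR/FATF schema.
    originator_name: str
    originator_account: str   # e.g. wallet address or account number
    beneficiary_name: str
    beneficiary_account: str
    asset_type: str            # type of crypto-asset transferred
    amount: str                # kept as a string to avoid float rounding issues
    transfer_timestamp: str    # date and time of the transfer (ISO 8601)

def build_record(originator: dict, beneficiary: dict, asset_type: str, amount: str) -> TravelRuleRecord:
    """Assemble the information a sending CASP would pass to the receiving CASP."""
    return TravelRuleRecord(
        originator_name=originator["name"],
        originator_account=originator["account"],
        beneficiary_name=beneficiary["name"],
        beneficiary_account=beneficiary["account"],
        asset_type=asset_type,
        amount=amount,
        transfer_timestamp=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    record = build_record(
        {"name": "Alice Example", "account": "0xabc...123"},
        {"name": "Bob Example", "account": "0xdef...456"},
        asset_type="EMT (EUR-pegged)",
        amount="250.00",
    )
    # In practice the record would be sent over a secure channel and retained
    # for at least five years; here we simply serialise it for illustration.
    print(json.dumps(asdict(record), indent=2))
```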

MiCAR aims to establish a level playing field and a single market for crypto-assets and related services within the EU. This is achieved by harmonizing and simplifying the current national regulatory frameworks, thereby eliminating regulatory fragmentation and uncertainty. It also acknowledges the need for a degree of regulatory flexibility and discretion at the national level, which opens the door to regulatory arbitrage and competition among EU member states in specific areas. Some of the leading EU jurisdictions for MiCAR compliance and regulatory arbitrage are France, Germany, and Malta. These jurisdictions have already adopted national regimes for crypto-assets and related services that are solid, flexible, and attractive, as well as clear and consistent. They have supportive and innovative regulators, such as the AMF, BaFin, and MFSA, which have issued guidance and recommendations on crypto-assets and related services. They also have robust and diversified crypto-asset ecosystems, with several established and emerging players. These jurisdictions are likely to maintain and enhance their leading positions in the crypto-asset market under MiCAR, as they have a competitive edge and a first-mover advantage over other member states.

To sum up, MiCAR is landmark legislation that will shape the future of crypto-assets in the EU. It will introduce legal certainty, consumer protection, market integrity, and financial stability, and it will foster innovation and competition by enabling cross-border activities and passporting rights for crypto-asset issuers and CASPs within the EU.

It is visionary and ambitious legislation that reflects the importance and potential of crypto-assets and related services and that responds to the needs and expectations of the crypto-asset community and society at large. It is also complex and dynamic legislation that requires constant monitoring and evaluation and may face some difficulties and uncertainties in its application and enforcement. I hope that MiCAR will be able to adapt and evolve with the changing and growing nature of crypto-assets and related services and that it will achieve its objectives and benefits.

I look forward to seeing the development and implementation of this framework, and I hope it will contribute to the growth and maturity of the crypto-asset industry in the EU and beyond.

 

Source: https://www.financialexpress.com/business/digital-transformation-how-the-eu-is-regulating-crypto-assets-with-micar-and-why-you-should-care-3434243/

 

Anndy Lian is an early blockchain adopter and experienced serial entrepreneur who is known for his work in the government sector. He is the best-selling author of “NFT: From Zero to Hero” and “Blockchain Revolution 2030”.

Currently, he is appointed as the Chief Digital Advisor at Mongolia Productivity Organization, championing national digitization. Prior to his current appointments, he was the Chairman of BigONE Exchange, a global top 30 ranked crypto spot exchange and was also the Advisory Board Member for Hyundai DAC, the blockchain arm of South Korea’s largest car manufacturer Hyundai Motor Group. Lian played a pivotal role as the Blockchain Advisor for Asian Productivity Organisation (APO), an intergovernmental organization committed to improving productivity in the Asia-Pacific region.

An avid supporter of incubating start-ups, Anndy has also been a private investor for the past eight years. With a growth investment mindset, Anndy strategically demonstrates this in the companies he chooses to be involved with. He believes that what he is doing through blockchain technology currently will revolutionise and redefine traditional businesses. He also believes that the blockchain industry has to be “redecentralised”.


What the EU Gets Right with its New AI Rules

The European Union’s latest effort to rein in artificial intelligence, the AI Act, marks a pivotal step towards regulating a technology that is as pervasive as it is potent. With its public unveiling on January 21, the Act lays a framework that seeks to harness AI’s capabilities while safeguarding the fundamental tenets of trust, ethics, and human rights.

As we unpack the Act’s dimensions, we will weigh its merits against its potential impediments to the trajectory of AI, not just within the confines of Europe but as a precedent for the global stage. The discourse around this groundbreaking legislation is as much about its current form as it is about the dialogue it engenders concerning the future interplay of artificial intelligence with our societal mores and economic frameworks.

Does it strike the right balance?

The AI Act introduces a risk-based regulatory schema, categorizing AI systems into unacceptable-risk, high-risk, limited-risk, and minimal-risk tiers. The Act prohibits ‘unacceptable risk’ AI systems, such as manipulative social scoring and covert emotional manipulation, to protect individual rights. ‘High-risk’ AI systems, pivotal in healthcare, education, and law enforcement, face rigorous requirements, including human oversight. ‘Limited-risk’ AI systems, like chatbots, must disclose their AI nature to users. Lastly, ‘minimal-risk’ AI systems, like video games, face minimal regulatory constraints, promoting innovation while safeguarding against abuses.
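
As a rough illustration of how an organization might triage its own systems against these tiers, the sketch below maps example use cases to risk categories; the tier names follow the Act, but the example systems and the `obligations` helper are illustrative assumptions, not an official classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements, including human oversight"
    LIMITED = "transparency obligations (e.g. disclose AI nature)"
    MINIMAL = "largely unregulated"

# Illustrative mapping only -- a real classification depends on the use case
# and must follow the Act's annexes, not a product name.
EXAMPLE_SYSTEMS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI triage tool in a hospital": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "AI opponent in a video game": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Return the assumed tier and its headline obligation for an example system."""
    tier = EXAMPLE_SYSTEMS.get(system, RiskTier.MINIMAL)
    return f"{system}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(obligations(name))
```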

The AI Act is crafted with the dual goals of fostering technological innovation and upholding fundamental rights. The Act’s targeted regulatory focus seeks to minimize undue burdens on AI practitioners by emphasizing the control of applications with the most potential for harm. However, it is not without its detractors. Critics point to its ostensibly broad and ambiguous language, which may leave too much open to interpretation, potentially leading to legal uncertainties.

The Act’s broad definition of AI as a technology-neutral concept, its reliance on subjective terminology like “significant” risk, and the discretionary power it affords to regulatory bodies are seen as potential stumbling blocks, raising concerns over possible inconsistencies and confusion for stakeholders within the EU’s digital marketplace.

A significant challenge the EU’s AI Act faces is ensuring consistent enforcement across all member states. To address this, the Act constructs an elaborate governance structure that includes the European Artificial Intelligence Board and national authorities, bolstered by bodies responsible for market surveillance. The Act stipulates robust penalties for non-compliance, including fines of up to 7% of global annual turnover. Beyond punitive measures, it emphasizes the role of self-regulation, expecting AI entities to undertake conformity assessments and maintain risk management protocols. The Act also recognizes the importance of global cooperation, considering the divergent AI regulatory landscapes outside the EU.
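
To put that penalty ceiling in perspective, here is a trivial worked example; the turnover figure is a hypothetical assumption used only to illustrate the scale involved.

```python
# Hypothetical example of the AI Act's penalty ceiling (up to 7% of global annual turnover).
global_annual_turnover_eur = 2_000_000_000   # assumed turnover for illustration
max_fine_rate = 0.07                          # 7% ceiling cited in the Act
max_fine_eur = global_annual_turnover_eur * max_fine_rate
print(f"Maximum possible fine: EUR {max_fine_eur:,.0f}")   # EUR 140,000,000
```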

The efficacy of the Act will ultimately hinge on the collective engagement and adherence of all parties to its stipulated frameworks.

Some pros and cons of the AI Act

The AI Act directly addresses the burgeoning field of advanced technologies, focusing on generative AI, biometric identification, and the nascent realm of quantum computing. These technologies hold transformative potential across diverse sectors including healthcare, education, entertainment, security, and scientific research.

Yet, with great potential comes a spectrum of challenges, particularly concerning ethical issues like bias and discrimination, as well as concerns over privacy, security, and accountability. The Act confronts these challenges head-on by instituting rules and obligations tailored to specific AI categories. For instance, generative AI systems — which can create new, diverse outputs such as text, images, audio, or video from given inputs — must adhere to stringent transparency obligations. This is particularly pertinent as generative AIs like ChatGPT and DALL-E find broader applications in content creation, education, and other domains.

The Act acknowledges the potential for malicious use of generative AI, such as spreading disinformation, engaging in fraudulent activities, or launching cyberattacks. To counteract this, it mandates that any AI-generated or manipulated content must be identifiable as such, either through direct communication to the user or through built-in detectability. The goal is to ensure that users are not deceived by AI-generated content, maintaining a level of authenticity and trust in digital interactions.

Additionally, the Act requires AI systems that manipulate content to be designed in such a way that their outputs can be discerned as AI-generated by humans or other AI systems. This provision aims to preserve the integrity of information and preclude the erosion of factual standards in the digital age.
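
As a concrete illustration of this identifiability requirement, the sketch below attaches a machine-readable provenance label to a generated output; the JSON structure and the `label_ai_output` helper are illustrative assumptions, not a format prescribed by the Act.

```python
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> dict:
    """Wrap generated text with a machine-readable 'AI-generated' marker.

    Illustrative only: the Act requires that AI-generated or manipulated
    content be identifiable, but it does not prescribe this particular format.
    """
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,  # assumed metadata field
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

if __name__ == "__main__":
    labelled = label_ai_output("Example synthetic paragraph.", "hypothetical-model-v1")
    print(json.dumps(labelled, indent=2))
```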

The AI Act is intentionally crafted to harmonize technological progress with the protection of foundational societal norms and values. The Act’s efficacy is predicated on the meticulous application of these regulations, keeping pace with the rapid development of AI technologies.

Turning to biometric identification systems, these tools are capable of recognizing individuals based on unique physical or behavioral traits such as facial features, fingerprints, voice, or even patterns of movement. While they offer enhancements in security, border management, and personalized access, they simultaneously raise substantial concerns for individual rights, including privacy and the presumption of innocence.

The Act specifically addresses the sensitive nature of biometric identification, incorporating stringent controls over its deployment. It notably restricts the use of real-time biometric identification systems in public areas for law enforcement, barring a few exceptions where the circumstances are critically compelling — such as locating a missing child, thwarting a terrorist threat, or tackling grave criminal activity.

In cases where biometric techniques are employed for law enforcement, the Act mandates prior approval from an independent authority, ensuring that any use is necessary, proportionate, and coupled with human review and protective measures. This regulatory stance underlines a commitment to uphold civil liberties even as we advance into an era of increasingly sophisticated digital surveillance tools.

Harnessed from the enigmatic realm of quantum physics, quantum computing emerges as a technological titan capable of calculations that dwarf the prowess of traditional computers. With the power to sift through vast data and unlock solutions to hitherto intractable problems, its potential spans the spectrum from cryptography to complex simulations, and from optimization to machine learning. Yet, this same capability ushers in novel risks: the crumbling of current cryptographic defenses, the birth of unforeseen security breaches, and the potential to tilt global power equilibria. The European Union’s AI Act, while not directly addressing quantum computing, encompasses AI systems powered by such quantum techniques within its regulatory embrace, mandating adherence to established rules based on the assessed risk and application context. Moreover, the Act presciently signals the need for persistent exploration and innovation in this sphere, advocating for the creation of encryption that can withstand the siege of quantum capabilities.

The Act’s influence on the vanguard of technology is paradoxical. It affords a measure of predictability and a compass for AI practitioners and end-users alike, weaving a safety net for the digital citizenry. Conversely, it may erect hurdles that temper the speed of AI progress and competitive edge, leaving a mist of ambiguity over the governance and stewardship of AI. The true measure of the Act’s imprint will reveal itself in the finesse of its enforcement, its interpretative flexibility, and its dance with the ever-evolving tempo of AI innovation.

Ethical considerations

The ethical tapestry of the AI Act is rich and intricate, advocating for an AI that is at once robust, ethical, and centered around human dignity, reflecting and magnifying the EU’s core values. It draws inspiration from the Ethics Guidelines for Trustworthy Artificial Intelligence, which delineate seven foundational requirements for the ethical deployment of AI, from ensuring human agency to nurturing environmental and societal flourishing. These principles are not merely aspirational; they are translated into tangible and binding mandates that shape the conduct of AI creators and users.

This ambitious ethical framework, however, does not come without its conundrums and concessions. It grapples with the dynamic interplay of competing interests and ideals: the equilibrium between AI’s boon and bane, the negotiation between stakeholder rights and obligations, the delicate dance between AI autonomy and human supervision, the reconciliation between market innovation and consumer protection, and the symphony of diverse AI cultures under a unifying regulatory baton. These quandaries do not lend themselves to straightforward resolutions; they demand nuanced and context-sensitive deliberations.

The ethical footprint of the Act will also depend on its reception within the AI community and the wider public sphere. Its legacy will be etched in the collective commitment to trust and responsibility across the AI ecosystem, involving developers, users, consumers, regulators, and policymakers. The vision is a Europe — and indeed, a world — where AI is synonymous with trustworthiness and accountability. This lofty goal transcends legal mandates, reaching into the realm of ethical conviction and societal engagement from every stakeholder.

In an era where artificial intelligence weaves through the fabric of society, the AI Act emerges as a pioneering and comprehensive legislative beacon, guiding AI towards a future that harmonizes technological prowess with human values.

The Act casts a wide net, touching on policy formulation, regulatory architecture, and the ethical lattice of AI applications across and beyond European borders. It stands as a testament to opportunity and foresight, yet it is not without its intricate tapestry of challenges and quandaries. The true measure of its influence lies not in its immediate enactment but in the organic adaptability and robust enforcement as the landscape of AI shifts and expands.

It’s crucial to articulate that this Act doesn’t represent the terminus of regulatory dialogue but inaugurates a protracted era of AI governance. It necessitates periodic refinement in lockstep with the march of innovation and the unveiling of new horizons and prospects. This legislative framework calls for a symphony of complementary endeavors: the investment in research, the enrichment of education, the deepening of public discourse, and the cultivation of global partnerships.

Embarking on this audacious path to an AI domain that is dependable, ethical, and human-centric is a collective venture. It demands a concerted commitment from all corners of the AI sphere — developers, users, policymakers, and citizens alike. It is an invitation to contribute to and bolster this trailblazing expedition into the domain of artificial intelligence — an odyssey that we all are integral to shaping.

 

 

Source: https://intpolicydigest.org/what-the-eu-gets-right-with-its-new-ai-rules/



EU AI Act: A Significant Step Toward Global AI Governance

In recent years, Artificial Intelligence (AI) has emerged as a powerful tool that has transformed many aspects of modern life, including creating and consuming content. Using generative AI tools like ChatGPT has opened up new possibilities for content creation but has also raised new challenges and questions around copyright. The issue of copyright and AI-generated content is complex, involving various legal and ethical considerations.

As AI technologies become more prevalent in content creation, it is essential to address the questions of ownership, attribution, and compensation for AI-generated works. One of the primary challenges is that existing copyright laws are struggling to keep up with the rapid advancements in AI technology. The current legal framework, designed for traditional forms of content creation, may not be adequately equipped to address the unique aspects of AI-generated content.

Moreover, as AI-generated content becomes more prevalent, it is crucial to consider the ethical implications, particularly around issues such as bias, privacy, and accountability. AI algorithms can amplify existing biases, leading to unfair treatment of certain groups or individuals. Additionally, AI-generated content can raise privacy concerns as it may involve the use of personal data.

To address these challenges, policymakers, industry leaders, and other stakeholders are working to establish clear guidelines and regulations that balance the interests of creators, users, and AI technologies while considering the ethical implications of AI-generated content. For instance, the European Union (EU) is currently drafting the AI Act, a new law aimed at regulating the use of AI technologies in the EU. We will talk more about this in this article.

What is the EU AI Act?

The European Union (EU) introduced the EU AI Act in April 2021, proposing a comprehensive legal and regulatory framework for AI. The proposed regulation covers all types of AI in various sectors, including entities that use AI systems professionally. The regulation aims to tackle challenges and risks linked to AI development and deployment, including discriminatory and rights-violating AI.

The EU AI Act places primary responsibility on AI system providers, creating a legal framework for developing, distributing, and using AI. The regulation includes broad and general articles to ensure its application across different industries and use cases. The EU AI Act is currently making its way through the EU’s ordinary legislative procedure. Members of the European Parliament reached a preliminary agreement on the AI Act in April 2023, and the text is scheduled to proceed to a plenary vote in June 2023. Upon approval, the EU AI Act will be among the first AI-specific regulations in the world.

It is essential to note that the EU AI Act is a significant development in the regulation of AI systems, as it addresses the associated risks and challenges comprehensively and uniformly. The regulation’s general nature ensures adaptability and applicability across different industries and use cases, marking a significant step towards AI regulation in the EU.

How would the EU AI Act help with generative works?

The EU AI Act, a proposed regulation for the use of AI technology, may also help regulate the use of generative works. The act includes provisions on transparency, data quality, and human oversight, which are relevant to developing and using generative AI models such as ChatGPT. In particular, the act would require companies that use AI tools to disclose any copyrighted materials employed in developing their systems. This could help prevent the unauthorized use of intellectual property in generative works. Additionally, the EU proposes requiring companies that provide generative AI services to explain the reasons and ethical standards for their decisions.
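
To picture what such a disclosure might look like in practice, the sketch below keeps a simple manifest of training sources and extracts the copyrighted ones for reporting; the field names and the `disclosure_report` helper are illustrative assumptions rather than anything specified in the draft Act.

```python
import csv
import io

# Illustrative training-data manifest: each row records a source used in
# developing the system and whether it contains copyrighted material that
# would need to be disclosed under the draft Act's transparency provisions.
MANIFEST_FIELDS = ["source_id", "title", "rights_holder", "copyrighted", "licence"]

example_rows = [
    {"source_id": "ds-001", "title": "Example news archive", "rights_holder": "Example Publisher",
     "copyrighted": True, "licence": "commercial licence (assumed)"},
    {"source_id": "ds-002", "title": "Public-domain corpus", "rights_holder": "n/a",
     "copyrighted": False, "licence": "public domain"},
]

def disclosure_report(rows):
    """Return only the copyrighted sources, i.e. what would be disclosed."""
    return [r for r in rows if r["copyrighted"]]

if __name__ == "__main__":
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=MANIFEST_FIELDS)
    writer.writeheader()
    writer.writerows(disclosure_report(example_rows))
    print(buf.getvalue())
```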

It’s worth noting that generative AI tools like ChatGPT have also come under scrutiny in other areas. For example, the US Consumer Financial Protection Bureau (CFPB) is examining how generative AI tools could propagate bias or misinformation and create risks in the financial services sector. Some experts have pointed out that algorithms used by generative AI tools like ChatGPT could be subject to legal protections similar to those that govern content on social media platforms like YouTube.

Generative AI was not prominently featured in the original proposal for the AI Act, as it only had one mention of “chatbot” in the 108-page document. However, the act has been revised to include stricter rules for “foundation model” systems, which include generative AI systems like ChatGPT. The revised text also emphasizes the importance of developing European standards for AI, which could help ensure that generative AI models meet the act’s essential requirements for different levels of risk.

Risks and challenges associated with the development and deployment of AI

The development and deployment of AI come with various risks and challenges that must be addressed to ensure its ethical use. One of the main concerns is that AI systems, if not implemented correctly, can violate human rights and discriminate against marginalized communities. Discriminatory AI systems can lead to biased decision-making processes that disproportionately affect certain groups, such as migrants, refugees, and asylum seekers.

Moreover, AI systems that interact with physical objects, such as autonomous vehicles and robots, have the potential to cause harm, making safety and security a significant ethical concern in AI development. AI-generated code can also lead to unintended consequences, and LLMs’ ability to generate functional code remains limited, even though they are powerful tools for answering high-level but specific technical questions.

To address these challenges, the Asilomar AI Principles recommend that AI systems be developed and employed in ways that reduce the risk of unintentional harm to humans. It is also important to ensure that AI systems are designed to be inclusive and transparent, further minimizing the risk of harm to human users.

As the EU and the US are jointly pivotal to the future of global AI governance, it is crucial to ensure that EU and US approaches to AI risk management are generally aligned to facilitate bilateral trade. At the same time, AI developers need to establish safeguards that protect users from potential risks. OpenAI, for instance, has established AI safeguards and has a vision for AI’s ethical and responsible development.

How is the United States looking at AI copyright?

The topic of AI copyright rules in the United States is a complex and evolving issue. Several recent legal cases and proposed regulations shed light on the current state of the law.

One major concern is whether AI-generated works can be protected by copyright law. Currently, most countries, including the US, require a human author for copyright protection to arise. However, ongoing discussions and proposed legislation may change this requirement in the future.

Another issue is the use of copyrighted material in training AI models. Some AI tools are trained on massive datasets that contain copyrighted works without obtaining specific licensing for this use. This raises questions about whether this use constitutes copyright infringement.

Recent legal cases also shed light on the issue of AI copyright rules. For example, Getty Images filed a lawsuit against Stability AI in February 2023, alleging copyright infringement, trademark infringement, and trademark dilution.

In April 2023, the US Supreme Court heard a case that could have implications for AI-generated works. The case concerns fair use law and whether AI tools can be protected under it.

Proposed regulations in the European Union may also have an impact on AI copyright rules in the US. The EU is drafting the AI Act to regulate emerging AI technology, including copyright and intellectual property issues.

In conclusion

EU lawmakers have agreed that companies using generative AI tools like ChatGPT will have to disclose any copyrighted material used in developing their systems as part of a larger draft law known as the AI Act. This is a big move, in my opinion.

The complex issue of AI-generated content and copyright requires attention from both legal and ethical perspectives. While debates and lawsuits continue regarding the use of generative AI tools in content creation, it is apparent that current copyright laws are struggling to keep up with technological advancements.

As AI continues to revolutionize content production and consumption, policymakers and industry leaders must collaborate to establish guidelines that balance the interests of creators, users, and AI technologies. These guidelines should provide clarity on issues like ownership, attribution, and compensation for AI-generated content.

It is also essential to consider the ethical implications of AI-generated content, including issues like bias, privacy, and accountability. As AI-generated content becomes more prevalent, it is crucial to ensure responsible and transparent production and use.

To address this issue, policymakers, industry leaders, and other stakeholders must work together to establish clear guidelines and regulations. These regulations should balance the interests of all parties involved and take into account the ethical implications of AI-generated content. This effort is critical in ensuring that AI continues transforming content creation and consumption fairly, equitably, and responsibly.

 

Source: https://www.securities.io/eu-ai-act-a-significant-step-toward-global-ai-governance/

