AI Utilization Strategy Taskforce
Committee on Digital Economy
Keidanren (Japan Business Federation)
1. General Points
The Proposal for an Artificial Intelligence (AI) Act#1 recently published by the European Commission aims to encourage the development and implementation of trusted AI as a way of resolving environmental and social issues. In this, it shares the same general aims as the Keidanren's Trusted Quality AI Ecosystem.
At the present stage, however, ambiguities and room for interpretation remain in the definitions of prohibited and high-risk AI and other terms. This risks dampening the appetite for investment in Europe and hindering the fostering and strengthening of new AI companies, with a possible negative impact on innovation and national security. Before the Act is passed into law, terms should be clarified and explanations added, along with guidelines and other provisions.
Also, given the rapidly increasing speed of technological innovation and social implementation in the AI field, moving discussions forward without regularly considering the latest conditions may cause confusion. In determining the concrete specifics of the applicability and content of the regulations, an adequate process for dialogue should be established, at forums including the EAIB (European Artificial Intelligence Board) and international standardization bodies (ISO/IEC JTC1 SC42), that includes representatives of industries outside the EU that will be affected by the regulations.
At the same time, there is a need to establish a transparent framework to review the content of the regulations in an ongoing and flexible manner by continuing close dialogue even after the regulations have been introduced. In introducing new laws and regulations, these restrictions should be kept to the necessary minimum, in order to maximize the benefits to society accruing from the use and implementation of cutting-edge technology.
The imposition of strict regulations on AI providers alone has the potential to obstruct the formation of a trusted AI ecosystem, the opposite of the effect intended. To ensure the appropriate use of AI, rather than placing all the responsibility on AI providers, we hope the regulations will make clear the need for efforts across the entire AI ecosystem, and that steps will be taken to encourage users to make such efforts as well.
2. Individual Points
Article 2 Scope
(Excerpt from p.39)
- There should be clarification of the obligations imposed on providers and users of AI systems where only the output generated by an AI system is used in the EU (for example, video content created by a company outside the EU using an AI system), rather than the provision of the AI system itself.
Article 3 Definitions
(Excerpt from p.40)
- Assuming the provider has taken steps to implement appropriate security measures, there should be an exemption in the case of malicious attacks from users or systems.
(Excerpt from p.40)
- The definition of “safety component” should be made clearer and concrete examples provided within each field of technology. For example, it should be made clear to what extent these provisions would apply to AI camera home monitoring systems, such as fire detectors, baby monitors, and security systems.
- There should be clarification of who is responsible for deciding whether a device constitutes a “safety component.” If the provider is to be responsible for this decision, steps should be taken to provide the necessary information and other support, including the designation of a specialist body that providers can approach for advice with difficult cases.
(Excerpt from p.42)
- There should be clarification of the criteria for the terms “at a distance” and “prior knowledge of the user,” as well as a definition of “user.”
- It should be made clear that a system that temporarily detects and categorizes facial or physical characteristics but does not identify an individual (for example, an AI system that ascertains overall customer flow within a store without focusing on individuals, for use in marketing purposes by a private company) does not count as a “remote biometric identification system” and does not constitute high-risk AI.
Article 5 The following artificial intelligence practices shall be prohibited
Article 6 Classification rules for high-risk AI systems
- A risk-based approach that uses excessively broad definitions of prohibited and high-risk AI runs the risk of inhibiting innovation within the EU. Accordingly, the scope of applicability should be made more limited, and the scope of the regulations made clearer, along with the methods for measuring and assessing risk. The applicability of the regulations could be determined based on individual use cases and their associated levels of risk.
- One category of prohibited AI is AI used to produce subliminal effects. The definition should be limited to AI that makes deliberate use of subliminal techniques; AI systems in which subliminal effects may arise unintentionally, such as audiovisual content, games, and commercial messaging for marketing, should not be covered by the regulations.
- One prohibited example of AI involves the use of a “real-time” remote biometric identification system for the purpose of law enforcement. It should be made clear whether this includes cases in which law enforcement agencies use such a system for security within their own premises, or cases in which a private company sends a report to law enforcement agencies after its AI system detects signs of suspicious activity. The scope of the term “publicly accessible spaces” should be clarified with specific examples, taking into account the risks involved in individual use cases.
- Consideration should be given to the cost effectiveness of the responsibilities placed on high-risk AI stakeholders. For example, the introduction and operation of the provisions laid down in Chapter 2 (including risk management systems, data governance, and documentation and record keeping), Chapter 3 (including quality management systems), and Chapter 5 (including conformity assessment) are likely to have a significant impact on stakeholders, as indicated in the impact assessment (SWD (2021) 84 final). There is therefore a need for more detailed discussions on these sections in the future.
Article 6 Classification rules for high-risk AI systems
(Excerpt from p.45)
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
- For diagnostic AI that supports the maintenance and upkeep of a “safety component,” the AI's output is not applied automatically; decisions are made by a human operator based on the information the AI produces, which should reduce the risk the AI poses. Consequently, classification rules (a) and (b) should be relaxed, in whole or in part.
Article 7 Amendments to Annex III
(Excerpt from p.45)
2. (h) the extent to which existing Union legislation provides for:
(ii) effective measures to prevent or substantially minimise those risks.
- Article 7 (2) (h) states that the Commission will take into account the scope of existing EU legislation when considering delegated acts to update the list in Annex III. In this case, opportunities should be provided to canvass a wide range of opinions from industry representatives, to avoid introducing excessive regulations that do not match the actual circumstances.
- The addition of excessive regulations should be avoided in fields where sufficient third-party certification systems are already in place under existing regulations and systems, such as regulations for medical devices.
Article 9 Risk management system
(Excerpt from pp.46-47)
(a) identification and analysis of the known and foreseeable risks associated with each high-risk AI system;
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse; (text omitted)
- This section refers to analysis and evaluation of known and foreseeable risks: the specific scope of these terms and the measures required should be clarified, along with concrete examples.
- Bearing in mind that AI models may change after being placed on the market, where an AI provider has issued a precautionary warning to prevent misuse or malicious use of the AI, users should be held responsible for problems arising from malicious use or misuse within the scope covered by that warning.
(Excerpt from p.47)
(a) elimination or reduction of risks as far as possible through adequate design and development; (text omitted)
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users.
- It should be made clear how much information providers need to maintain and share with users in order to provide information conducive to the appropriate use of high-risk AI and information relating to analysis and evaluation of risks.
- The requirement for “elimination or reduction of risks as far as possible through adequate design and development” should be clarified, and guidelines should be provided that also consider risk trade-offs.
Article 10 Data and data governance
- Guidelines consistent with the GDPR should be provided as soon as possible regarding the use of remote biometric identification and high-risk AI for which the pseudonymization and encryption requirements of the GDPR may apply.
- It would be desirable to consider the use of infrastructure technology#2 for global data free flow, in order to allow smooth analysis and transfer of data across borders, while complying strictly with the data protection laws of individual countries and regions.
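To illustrate the kind of pseudonymization contemplated by the GDPR and by the data-masking infrastructure mentioned above, the following minimal sketch (Python chosen for illustration; the field names and key are invented, not drawn from any regulation) replaces direct identifiers with keyed hashes before records leave the data controller, so that aggregate analysis across borders remains possible without exposing identities.

```python
import hmac
import hashlib

# Hypothetical sketch: keyed hashing as one form of GDPR-style
# pseudonymization. The secret key stays with the data controller;
# without it, a pseudonym cannot be traced back to the person.
SECRET_KEY = b"held-by-data-controller"  # illustrative placeholder

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def mask_record(record: dict) -> dict:
    """Pseudonymize identifier fields; pass analytic fields through."""
    identifier_fields = {"name", "email"}  # assumed schema for this sketch
    return {
        k: pseudonymize(v) if k in identifier_fields else v
        for k, v in record.items()
    }

record = {"name": "Taro Yamada", "email": "taro@example.com", "store_visits": 4}
masked = mask_record(record)
# The same identifier always maps to the same pseudonym, so analysis
# across transfers remains possible after masking.
assert masked["store_visits"] == 4
assert masked["name"] != "Taro Yamada"
```

Whether keyed hashing alone satisfies a given jurisdiction's pseudonymization requirements is exactly the kind of question the guidelines requested above should answer.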
(Excerpt from p.48)
(d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent; (text omitted)
3. Training, validation and testing data sets shall be relevant, representative, free of errors and complete. (Text omitted)
- Clarification should be given on the intended meaning of the terms “formulation of assumptions” and “free of errors and complete.” The obligations should be made realistic and verifiable.
(Excerpt from p.48)
- Taking geographic elements into account as requirements for data sets should be avoided, as this would make it difficult to learn from globally uniform data, and because the act of evaluating individual characteristics and elements in itself risks creating bias.
Article 11 Technical documentation
(Excerpt from p.49)
- This section requires companies to “provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements.” From the viewpoint of maintaining confidentiality, however, thorough discussions on this point should be held with multiple stakeholders, including industry representatives.
Article 12 Record-keeping
- Guidelines should be published explaining what laws and regulations should be considered in drawing up contracts, and what division of responsibilities should be decided in contracts. This should include validating contract processes that are in line with relevant existing legislation.
(Excerpt from p.49)
3. In particular, logging capabilities shall enable the monitoring of the operation of the high-risk AI system with respect to the occurrence of situations that may result in the AI system presenting a risk within the meaning of Article 65(1) or lead to a substantial modification, and facilitate the post-market monitoring referred to in Article 61.
- The definition of “substantial modification” should be clarified.
- Logging requirements should be restricted to a scope that can be envisaged and verified in advance.
- Where providers are required to retain log data and carry out monitoring based on data gathered after an AI system is placed on the market, they should not, from the perspective of protecting personal information and business confidentiality, be required to store more data than is absolutely necessary.
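The data-minimization approach argued for here can be made concrete: a logging scheme might restrict records in advance to a fixed, verifiable set of operational fields so that personal data cannot enter the log at all. The sketch below is hypothetical; the field names are assumptions for illustration, not requirements of the Act.

```python
import json
import time

# Hypothetical sketch: an event log restricted in advance to a fixed,
# auditable set of fields, so no personal data can enter the record.
ALLOWED_FIELDS = {"timestamp", "model_version", "decision", "confidence"}

def log_event(event: dict) -> str:
    """Serialize an operation event, dropping any field outside the
    pre-agreed minimal schema."""
    minimal = {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
    minimal.setdefault("timestamp", time.time())
    return json.dumps(minimal, sort_keys=True)

entry = log_event({
    "model_version": "1.2.0",
    "decision": "flag_for_human_review",
    "confidence": 0.87,
    "customer_name": "should never be stored",  # stripped by the schema filter
})
assert "customer_name" not in entry
```

Because the schema is fixed before deployment, the scope of logging is "envisaged and verifiable in advance" in the sense the bullet above requests.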
Article 13 Transparency and provision of information to users
(Excerpt from p.50)
- Regarding the requirement to ensure transparency and provide information to users, guidelines should be provided with practicable and concrete examples, including whether XAI#3 is assumed.
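As a minimal illustration of what XAI#3 can mean in practice, the sketch below shows one hypothetical form an explanation could take for the simplest class of models: per-feature contributions to a linear score. The weights and feature names are invented for the sketch; guidelines could indicate whether explanations of this kind would satisfy the transparency requirement.

```python
# Hypothetical sketch: per-feature contributions as a minimal form of
# explainability for a linear scoring model (weights are invented).
weights = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.7}

def score_with_explanation(features: dict) -> tuple[float, dict]:
    """Return the model score plus each feature's contribution to it."""
    contributions = {k: weights[k] * features[k] for k in weights}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 4.0, "years_employed": 10.0, "existing_debt": 2.0}
)
# 0.5*4.0 + 0.3*10.0 - 0.7*2.0 = 2.0 + 3.0 - 1.4 = 3.6
assert abs(score - 3.6) < 1e-9
assert why["existing_debt"] < 0  # the factor that lowered the score
```

For non-linear models no such direct decomposition exists, which is why the bullet above asks whether XAI is assumed at all.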
(Excerpt from p.50)
(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:
(ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights;
- Since it is difficult to list every applicable factor, including external ones, it should be made clear that “limitations” refers to those limitations that lie within the scope considered by the provider.
- Since the scope of what can be foreseen will vary from company to company, the wording “can be expected, and any known and foreseeable circumstances” and “any known or foreseeable circumstance” should be amended to require providers to give users explanations within the scope the provider has foreseen.
Article 14 Human oversight
(Excerpt from p.51)
- In considering specific oversight requirements within each field, steps should be taken to ensure that these are practical for companies, taking the following three points into consideration:
(1) Proper consideration should be given to the level of human involvement required, weighing the respective advantages and disadvantages of oversight by humans and by AI.
(2) The requirement should be omitted or relaxed where steps to eliminate risk have been built into the system, including the AI, at the design stage through an independent safety structure, even if the level of human involvement is low.
(3) The requirement should be omitted or relaxed where the AI has been demonstrated to be free from bias or to be more accurate than verification of operations by a human operator.
(Excerpt from p.51)
(b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
- A clearer explanation should be given of the content of requirements with regard to the obligation to “remain aware of the possible tendency” of “automation bias.”
Article 15 Accuracy, robustness and cybersecurity
(Excerpt from pp.51-52)
- The scope intended by the phrase “appropriate level” is unclear. Since the ability of companies to foresee these factors will vary, this should be amended to apply only within the scope that a company can foresee.
(Excerpt from p.52)
- Complying with this requirement to implement “mitigation measures” in AI systems that continue to learn would only be possible by signing contracts with a wide range of companies throughout the supply chain; further discussion with multiple stakeholders is needed on this point.
Article 16 Obligations of providers of high-risk AI systems
(Excerpt from p.52)
1. (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title;
- Providers should be able to obtain an exemption from this requirement on condition that they exercise a certain level of due diligence in risk verification. For example, where a user alters a product or service supplied by the provider so as to enable a use deemed to be prohibited AI, the provider should be released from responsibility if this is done without the provider's involvement.
- In Clause (34) on p.26, with regard to “AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity,” the risk of system malfunction is generally reduced through multiplexing and redundancy. In cases where such measures are in place, they should be included as factors to be considered when imposing obligations on high-risk AI.
Article 19 Conformity assessment
(Excerpt from p.54)
- To ensure that conformity assessment does not become a barrier to market participation for companies from outside the EU, a framework should be put in place as soon as possible to allow companies from outside the bloc to get advice in advance from the authorities on the Artificial Intelligence Act and related regulations and to obtain public certification.
Article 24 Obligations of product manufacturers
(Excerpt from p.55)
- In light of the content of the GDPR and other existing laws and regulations, consideration should be given to the division of responsibilities between users and providers/manufacturers, taking into account the balance between innovation and regulation.
Article 29 Obligations of users of high-risk AI systems
(Excerpt from p.58)
- Since it is difficult for providers to shoulder all the responsibility for malicious use, the regulations should contain a clearly stated prohibition on using a system for anything other than its intended purpose.
Article 43 Conformity assessment
(Excerpt from p.65)
- Because AI systems are improved in an agile manner through software updates and learning, the requirement to carry out regular conformity assessments would place an excessive burden on the entity carrying out the assessments, and should therefore be avoided.
- In Article 3, Definitions (23), the term “substantial modification,” cited as a condition requiring a new conformity assessment, should be more clearly defined.
Article 52 Transparency obligations for certain AI systems
(Excerpt from p.69)
- The scope and requirements of “AI systems intended to interact with natural persons” should be clarified.
(Excerpt from p.69)
- This section requires an AI system that generates or manipulates image, audio, or video content to disclose that the content has been artificially generated or manipulated, but there should be clarification about the methods by which this disclosure is to be made. Similarly, if CGI (Computer Generated Imagery) is to be covered by this provision, there needs to be a clarification of the method of disclosure.
Article 57 Structure of the Board
(Excerpt from p.72)
- Multiple stakeholders, including industry representatives and AI specialists, should be involved on an ongoing basis in the European Artificial Intelligence Board as members and should participate in its discussions and deliberations. Further, the composition and size of the Board should be continually reviewed in response to the development and progress of the technology.
Article 60 EU database for stand-alone high-risk AI systems
(Excerpt from p.74)
- Public disclosure should be limited to the purpose of assisting providers and others in deciding whether a system constitutes high-risk AI. A database and collection of examples should be compiled of cases deemed to represent high-risk AI and cases deemed not to, and used to guarantee the transparency of decisions on the applicability of the regulations. Further, a framework should be put in place to decide, in response to queries from providers, whether a given system or application constitutes high-risk AI, and the results of these decisions should be regularly added to the database and made publicly available.
Article 61 Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems
(Excerpt from pp.74-75)
- In carrying out post-market monitoring based on a template for the monitoring plan, room should be left for providers to obtain log data through contracts with users.
Article 64 Access to data and documentation
(Excerpt from p.77)
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system.
- For providers, source code is a vital asset underpinning their competitiveness, and in some cases they may be unable to disclose it for contractual or security reasons. There is also a risk that user concerns about their data being disclosed to the authorities will hinder the introduction of AI; access to data by the authorities should therefore be avoided. Where suspicions arise that make an investigation necessary, consideration should be given to appropriate ways of addressing the situation without relying on disclosure of source code, such as first requiring the company to provide an accountable explanation.
- Article 73 of the Japan-EU EPA clearly prohibits demands for access to source code between Japan and the EU, and consistency with this article should be made clear.
Article 69 Codes of conduct
(Excerpt from p.80)
- Consideration should be given to awarding incentives to companies that draw up appropriate codes of conduct.
Article 71 Penalties
(Excerpt from p.82)
- If the scope of penalties is too wide and the size of fines too large, there is a risk of excessively restricting the activities of companies in the European market. Appropriate penalties should therefore be put in place, taking into account the nature and content of the offense, the extent of the benefit gained as a result, and the presence or otherwise of malicious intent.
- Materials kept by the authorities regarding legislation based on this Act should be kept for a period sufficiently longer than the period for which companies keep logs and similar materials, since these materials may serve as important evidence if litigation becomes prolonged.
Article 73 Exercise of the delegation
(Excerpt from p.45)
- Considering the impact on business, when the Commission considers updating the list in the Annex, it should listen to the opinions of multiple stakeholders, including industry representatives and AI specialists, and make decisions carefully and prudently, having ensured transparency of debate.
Article 84 Evaluation and review
(Excerpt from p.87)
- In carrying out evaluations and revisions, the Commission should give due consideration to international best practices, based on discussions with multiple stakeholders, including industry representatives and AI specialists.
Annex III High-Risk AI Systems Referred to in Article 6 (2)
- The definitions of “high-risk AI” and “safety component” should be clarified.
- Systems should not be counted as high-risk AI where the risk of human rights infringements is sufficiently low. In the case of biometrics and categorizations, for example, systems should not be counted as high-risk AI if it has been confirmed that there is no risk of discrimination through the identification of attributes.
Annex IV Technical Documentation referred to in Article 11 (1)
(Excerpt from p.6)
(a) the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;
- Since some of the methods and steps used in development may be confidential, it may be difficult to disclose all this information. This section should be amended to allow non-disclosure of content relating to the confidentiality of company secrets.
(Excerpt from p.6)
- “General logic” should be clearly defined.
- Detailed explanations of AI systems are required to include “decisions about trade-offs,” but this should be clarified to require only the listing of trade-offs actually considered.
- Providers should not be required to list considerations regarding the verification and configuration of optimized systems, or trade-offs, beyond the scope agreed in discussions with users.
#1 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (published on April 21, 2021)
#2 Platform technology that automatically configures and executes programs at data collection sites, performing the processing necessary for data transfer (such as masking of highly confidential and sensitive data as stipulated by the GDPR and other data protection laws) and automatically transferring the data to data analysis sites.
#3 XAI (explainable artificial intelligence) refers to AI in which the processes leading to the results of predictions or estimates can be explained by a human.