New book: Dockfabriken

On 14 May, Lena Karlin and I are releasing an AI crime novel called Dockfabriken, published by Norstedts Förlag. The media contact is Katarina Lindell.

This is what the back cover says:

Dockfabriken

An exciting and surprising crime novel on the theme of artificial intelligence. The new crime-writing duo Karlin & Schwarz raise important questions about what defines a human being and where the line should be drawn for what ought to be permitted.

The prominent robotics researcher Heinrich Becker and his groundbreaking invention, the robot Adam, are kidnapped. Security Service officer Alex Lindhage investigates the case. During an interview with fellow researcher Clara, he realises that her relationship with Heinrich is strongly marked by rivalry, and he suspects that she is hiding something.

With the help of Heinrich's wife Nora, Alex tries to work out who might be behind the kidnapping, but the number of suspects grows by the day. What was Heinrich really working on? When Alex puts the pieces together, a picture of the successful researcher emerges that is more shocking than anything he could have imagined.

Dockfabriken is a gripping crime novel that also explores moral questions about robots and humanity. Absorbing crime reading for everyone who enjoyed the series Äkta människor.

Lena Karlin has translated around a hundred books, including works by Dan Brown and Ken Follett. Åsa Schwarz has several works of fiction behind her and frequently appears in the media as an expert on security and artificial intelligence. In 2017 she was named Security Profile of the Year in Sweden.

Publication date: 2020-05-14
Review date: 2020-06-01
ISBN: 978-91-1-310205-4

Legislation on artificial intelligence must be aligned with cyber security legislation and methods

At the Tallinn Digital Summit 2018, I took part in the session “Safety and Security in the Age of Artificial Intelligence” together with four ministers and three other specialists in the field. This is how I opened the session:

“I have recently worked on a number of implementations of cyber security laws in organisations. I would like to briefly explain the need for a legal foundation that enforces the safe development of AI systems and, above all, the need for it to be in line with previous cyber security legislation and methods.

Within corporations and other organisations, current risk management for IT systems is based primarily on two points of view. The first is the risks to the organisation itself, which need to be managed in order for operations to continue securely. The second is the individual perspective, which is regulated by privacy laws such as the Data Protection Act; here, the risks and potential repercussions of mismanaging personal data are analysed. In organisations that handle large amounts of sensitive personal information, and in government bodies, current legislation requires an independent Data Protection Officer who ensures compliance with existing legal requirements.

From a societal point of view, there is separate legislation that focuses on activities of importance at, for example, the European level. One example is the NIS Directive, which aims to ensure the reliability and security of the network and information services that are essential to everyday activities.

The problem is that we currently lack a comprehensive legal framework to protect society – and the rest of the world – against organisations that are irresponsible in their development of artificial intelligence. Furthermore, there are no acknowledged standards, methods or indeed precedents in the area. As a result, as long as the integrity of personal data is maintained, there are currently no restrictions, other than ethical ones, on irresponsible development of artificial intelligence.

To manage the gap between regulation and the capability of these new systems, it will be essential to introduce processes within organisations that focus on managing the risks associated with artificial intelligence. However, there is no need to reinvent the wheel: current cyber security methods and guidelines can be complemented with our current knowledge from research on artificial intelligence. Notably, the potential risks are far more wide-ranging than cyber security alone, with a large impact on fairness, ethics, transparency and accountability.

To manage these risks, I have four suggestions:

The first is to define the fundamental principles that should guide the development of artificial intelligence systems from a security, fairness, ethics, transparency and accountability point of view.

The second is to legislate against the irresponsible development of artificial intelligence. This legislation can be similar to the Data Protection Act, but with a focus on the protection of society as opposed to the protection of the individual.

The third is to define a model for the safe development of artificial intelligence systems which the legislation can refer to. Such a model could be used to determine whether the right tests have been performed and to ensure that the correct principles for system architecture and design have been adhered to. I really want to emphasise that such a model should not deviate from, but rather complement, existing models and processes for secure development, such as Microsoft’s Security Development Lifecycle or Privacy by Design. Any large deviation from existing frameworks may not only jeopardise organisations’ ability to implement them but may also be prohibitively expensive.

The fourth is that developers of artificial intelligence systems need a process for independent verification. An example could be an independent representative who verifies that the organisation complies with the legislation: an AI Protection Officer with a position similar to that of today’s Data Protection Officer.

Finally, I want to re-emphasise that all legislation within the area must mirror existing legislation and methods for secure development. Otherwise we will not get the results we are aiming for.

Åsa Schwarz”

The session on security and artificial intelligence had the following participants:

[Screenshot: the session participants]
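As a purely illustrative footnote to the third and fourth suggestions in the talk: below is a minimal sketch, in Python, of how an ordinary cyber security risk register could be extended with the extra dimensions mentioned above, so that high-scoring entries are routed to an independent reviewer such as the proposed AI Protection Officer. Every name, field and threshold here is a hypothetical assumption made for illustration, not an established standard or an existing tool.

```python
# A hypothetical sketch: an AI risk-register entry that extends an
# ordinary cyber security risk record with the dimensions from the talk.
# All names, fields and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

DIMENSIONS = ("security", "fairness", "ethics", "transparency", "accountability")

@dataclass
class AIRiskEntry:
    system: str          # the AI system under assessment
    description: str     # what could go wrong
    # dimension -> score from 1 (low) to 5 (high)
    scores: dict = field(default_factory=dict)

    def needs_independent_review(self, threshold: int = 4) -> bool:
        """Flag entries that an independent reviewer (the 'AI Protection
        Officer' suggested in the talk) should sign off on."""
        return any(self.scores.get(d, 0) >= threshold for d in DIMENSIONS)

# Usage: a register is just a list of entries, handled by the same review
# process the organisation already runs for cyber security risks.
register = [
    AIRiskEntry("loan-scoring model", "biased outcomes for applicants",
                {"fairness": 5, "transparency": 3}),
    AIRiskEntry("support chatbot", "leaks internal data when manipulated",
                {"security": 4, "accountability": 2}),
]
print([e.system for e in register if e.needs_independent_review()])
# -> ['loan-scoring model', 'support chatbot']
```

The point of the sketch is the design choice argued for in the talk: the AI-specific dimensions sit inside the organisation's existing risk process rather than replacing it.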