Austrian security research is growing. The new Kybernet-Pass programme focuses on cyber security and supports research, companies and authorities in making Austria’s digital future more secure.
Threats from the digital world are now part of everyday life. Anyone can become a victim – companies are blackmailed digitally and citizens receive calls from fraudsters. To better protect the population and raise awareness, the national security research funding framework Austrian Safety Pin was created. It consists of the civil security research programme KIRAS and the defence research programme FORTE, and will be completed in 2024 by the cyber security research programme Kybernet-Pass (K-PASS). Together they fund research projects with an average of 19 million euros per year to keep Austria ready for the future in terms of security policy in unstable times.
Research for Safety
Be it the constantly growing field of cyber security, the support of first responders, the protection of critical infrastructures, or concepts for securing the supply chains that provide the Austrian population with staple foods and the most important raw materials – it is crucial that the research projects help prepare both the physical and the economic security and well-being of Austrians for future challenges. An essential means to this end is to make the research results usable. Only if findings are applied in a timely manner can they unfold their full benefit and help overcome current crises or even prevent the emergence of future ones. Successful solutions such as cybersecurity from Austria can strengthen the domestic economy, not least if they are exported as concepts to other countries by means of technology transfer, thus contributing to value creation and securing jobs in Austria. To provide some examples in addition to theory and figures, we would like to present some recently supported Austrian-led projects.
KIRAS projects with added value
SINBAD
Security and prevention of organised online order fraud for users through digital forensic measures. The SINBAD project is researching the automated detection of fake shops in order to proactively protect consumers from internet fraud.
Prevention and speed are key instruments for protecting consumers from fraudulent offers in e-commerce. However, reports from affected consumers come in too late, and often the damage is done before a warning can be published. The Austrian initiative Watchlist Internet of the ÖIAT is working on intensifying the technological applicability of integrated procedures for automatic detection based on machine learning. Significant successes have already been achieved in preliminary projects with consortium partners under the leadership of the AIT's Center for Digital Safety and Security: these include the classification of fake shops by fingerprints in the source code, with machine-learning detection rates of over 90%, and the publication of a comprehensive corpus data set.
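To make the fingerprint idea concrete, here is a minimal sketch of matching a page's source-code fingerprint against known fake-shop templates. The tokenisation, Jaccard similarity and threshold below are our own illustrative assumptions; the project itself uses machine-learning classifiers rather than this simple matching.

```python
# Hypothetical sketch of fake-shop detection via source-code fingerprints.
# All names, the similarity measure and the threshold are illustrative
# assumptions, not the SINBAD project's actual code.
import re

def fingerprint(html: str, n: int = 3) -> set:
    """Tokenise HTML source and return the set of token n-grams."""
    tokens = re.findall(r"<\w+|\w+", html.lower())
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two fingerprint sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_like_fake_shop(html: str, known_fakes: list, threshold: float = 0.5) -> bool:
    """Flag a page whose fingerprint is close to any known fake-shop template."""
    fp = fingerprint(html)
    return any(jaccard(fp, fingerprint(k)) >= threshold for k in known_fakes)

# Toy usage: a page cloned from a known fake-shop template matches closely,
# while an unrelated page does not.
fake_template = "<html><body><div>pay only by bank transfer</div></body></html>"
clone = "<html><body><div>pay only by bank transfer today</div></body></html>"
legit = "<html><body><p>secure checkout with buyer protection</p></body></html>"

print(looks_like_fake_shop(clone, [fake_template]))  # True
print(looks_like_fake_shop(legit, [fake_template]))  # False
```

The appeal of source-code fingerprints is that fraudsters typically clone shop templates wholesale, so structural similarity survives superficial changes to product names and prices.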
Safe space: Online violence against women in (former) relationships
With the increasing digitisation of all areas of social life, the number of abusive uses of digital technologies is also rising. Those affected often do not find a safe space where they could protect themselves from the permanent threat and control. Previous research results show that the sphere of violence is extended by technological means. Spaces that can potentially offer protection to the affected women are thus increasingly disappearing. Even a spatial separation – such as fleeing to a women's shelter – cannot ensure that the affected women will not continue to be exposed to online violence. The current situation of women affected by online violence from an (ex-)partner shows that, despite high prevalence estimates, there is a lack of social science knowledge for developing handling strategies in relevant fields of practice (social work, police, justice) that meet the affected women's need for security in their private sphere. This reveals a gap between the current state of knowledge and the current state of technology.
Defalsif-AI
Detection of false information by means of artificial intelligence. In the context of media-forensic tools (hybrid threats / fake news), Defalsif-AI addresses in particular politically motivated disinformation, which weakens or threatens political and state institutions of our democracy – e.g. by influencing elections – and thus ultimately public trust in those institutions. The research focuses on audiovisual media forensics, text analysis, and their multimodal fusion with the help of artificial intelligence (AI). Emphasis is placed on the comprehensible and interpretable presentation of the results in order to reach and optimally support the broadest possible user base. The aim of the project is to demonstrate a proof-of-concept tool for the analysis of digital content on the internet, which enables an initial assessment of the content (text, image, video, audio) for credibility/authenticity and thus creates the basis for further recommendations for action. A comprehensive analysis and assessment of the media-forensic tool from a legal and social science perspective, the derivation of application-oriented technological and organisational measures, and an exploitation plan for the future operation of disinformation analysis platforms that conform to the rule of law round off the project.
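At its simplest, the multimodal fusion described above can be sketched as a weighted combination of per-modality credibility scores. The analyser outputs and weights below are invented stand-ins for illustration only, not the project's actual models or tuning.

```python
# Illustrative sketch of late (score-level) multimodal fusion for a
# credibility assessment. Scores, modality names and weights are
# hypothetical placeholders, not Defalsif-AI's implementation.
def fuse_scores(scores: dict, weights: dict) -> float:
    """Weighted average of per-modality credibility scores in [0, 1].

    Keeping per-modality scores visible alongside the fused value
    supports the kind of interpretable presentation the project aims for.
    """
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical analyser outputs for one piece of content
# (lower score = less credible in this sketch).
scores = {"text": 0.30, "image": 0.60, "audio": 0.45}
weights = {"text": 0.5, "image": 0.3, "audio": 0.2}  # assumed tuning

credibility = fuse_scores(scores, weights)
print(round(credibility, 2))  # 0.42
```

A score-level fusion like this keeps each modality's contribution traceable, which is one common way to make an AI-based assessment explainable to non-expert users.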