Commit fb1f910

fixed project pages
1 parent 21988fd commit fb1f910

File tree: 5 files changed, +10 −7 lines changed


content/project/BIFOLD/index.md (+1 −1)

@@ -9,7 +9,7 @@ categories: []
 date: 2022-01-20T11:16:31+01:00
 
 # Optional external URL for project (replaces project detail page).
-external_link: "https://bifold.berlin/de/"
+external_link: "https://www.bifold.berlin/"
 
 # Featured image
 # To use, add an image named `featured.jpg/png` to your page's folder.

content/project/DEEPLEE/index.md (+1 −1)

@@ -9,7 +9,7 @@ categories: []
 date: 2021-02-23T11:16:31+01:00
 
 # Optional external URL for project (replaces project detail page).
-external_link: "https://www.dfki.de/en/web/research/projects-and-publications/projects-overview/project/deeplee"
+external_link: "https://www.deeplee.de/"
 
 # Featured image
 # To use, add an image named `featured.jpg/png` to your page's folder.

content/project/Data4Transparency/index.md (+1)

@@ -40,3 +40,4 @@ url_video: ""
 # Otherwise, set `slides = ""`.
 slides: ""
 ---
+According to the World Bank and the UN, some US$1tn is paid in bribes every year. Corrupt financial transactions divert funds from legitimate public services, as well as distort free markets—potentially thwarting economic development—and reduce trust in institutions. The Organized Crime and Corruption Reporting Project (OCCRP) is a global platform for investigative reporting, providing resources to journalists and media centres, enabling cost-effective collaboration between editors and offering tools to secure themselves against threats to independent media. Exposing previously unknown connections between entities makes it possible for citizens, policymakers, activists and law enforcement agencies to act. As the number of such leaks and publications grows, there is an increasing need for effective, scalable and reproducible methods to discover any anomalies and evidence of malfeasance that might exist within them.

content/project/GenKI4Media/index.md (+6 −4)

@@ -1,10 +1,8 @@
 ---
 # Documentation: https://wowchemy.com/docs/managing-content/
 
-title: "GenAI4Media - Generative AI assistants for the media, cultural and creative sectors"
-summary: "Generative AI models have made great progress in recent years and have achieved impressive results. However, these models have so far only been of limited use to SMEs, as they are not sufficiently adapted to the specialised domains of companies and therefore produce erroneous content more frequently than in general fields of knowledge. In addition, the underlying generation process is often opaque and hard to follow for (lay) users. All these factors have a detrimental effect on trust in the models and their output, reducing their acceptance and thus also the development of optimised or new business processes. Particularly in the media, cultural and creative sectors, day-to-day editorial work is still characterised by time-consuming processes that require manual research and integration of multimodal materials as well as laborious checks on quality and legal requirements. The aim of the GenKI4Media project is to tap into the innovative potential of generative AI with three new generative AI assistants for (1) 'Generating multimodal media formats for culture, politics and education', (2) 'Standards and regulations in the media sector' and (3) 'Demonstrators for the creative/cultural sector' in order to effectively support editorial work. The AI assistants can be used dynamically for a wide range of tasks and do not have to be individually programmed for each task and target group, as was previously the case. The basis for the assistants is an innovative, continual development of AI technologies through plug-ins for knowledge organisation and transparency of LLMs.
-
-The aim of the DFKI sub-project is to research and develop methods and generative AI models to improve the transparency, traceability and trustworthiness of AI-generated content. To achieve these goals, the DFKI subproject focuses on three complementary R&D areas. The first area deals with the development of conversational methods for explainable AI that enable the end user to explore explanations in the form of an interactive dialogue. The second area covers the design and development of methods and algorithms that enable generated content to be automatically related to external sources, validated on the basis of these sources and corrected if necessary. The third area comprises the design and creation of task- and domain-specific test data sets, so-called challenge test sets, which rigorously test critical cases in generation tasks - for example, the avoidance of incorrect causal or temporal conclusions, factual misgenerations (hallucinations), as well as the correct processing of long tail or very domain-specific information."
+title: "GenKI4Media - Generative AI assistants for the media, cultural and creative sectors"
+summary: "GenAI models have so far only been of limited use to SMEs, as they are not sufficiently adapted to the specialised domains of companies and therefore produce erroneous content more frequently than in general fields of knowledge. The aim of the GenKI4Media project is to tap into the innovative potential of generative AI with three new generative AI assistants for (1) 'Generating multimodal media formats for culture, politics and education', (2) 'Standards and regulations in the media sector' and (3) 'Demonstrators for the creative/cultural sector' in order to effectively support editorial work."
 
 authors: [leonhard-hennig]
 tags: [Trustworthiness, Evaluation, Large Language Models]

@@ -42,3 +40,7 @@ url_video: ""
 # Otherwise, set `slides = ""`.
 slides: ""
 ---
+
+Generative AI models have made great progress in recent years and have achieved impressive results. However, these models have so far only been of limited use to SMEs, as they are not sufficiently adapted to the specialised domains of companies and therefore produce erroneous content more frequently than in general fields of knowledge. In addition, the underlying generation process is often opaque and hard to follow for (lay) users. All these factors have a detrimental effect on trust in the models and their output, reducing their acceptance and thus also the development of optimised or new business processes. Particularly in the media, cultural and creative sectors, day-to-day editorial work is still characterised by time-consuming processes that require manual research and integration of multimodal materials as well as laborious checks on quality and legal requirements. The aim of the GenKI4Media project is to tap into the innovative potential of generative AI with three new generative AI assistants for (1) 'Generating multimodal media formats for culture, politics and education', (2) 'Standards and regulations in the media sector' and (3) 'Demonstrators for the creative/cultural sector' in order to effectively support editorial work. The AI assistants can be used dynamically for a wide range of tasks and do not have to be individually programmed for each task and target group, as was previously the case. The basis for the assistants is an innovative, continual development of AI technologies through plug-ins for knowledge organisation and transparency of LLMs.
+
+The goal of the DFKI sub-project is to research and develop methods and generative AI models to improve the transparency, traceability and trustworthiness of AI-generated content. To achieve these goals, the DFKI subproject focuses on three complementary R&D areas. The first area deals with the development of conversational methods for explainable AI that enable the end user to explore explanations in the form of an interactive dialogue. The second area covers the design and development of methods and algorithms that enable generated content to be automatically related to external sources, validated on the basis of these sources and corrected if necessary. The third area comprises the design and creation of task- and domain-specific test data sets, so-called challenge test sets, which rigorously test critical cases in generation tasks - for example, the avoidance of incorrect causal or temporal conclusions, factual misgenerations (hallucinations), as well as the correct processing of long-tail or very domain-specific information.

content/project/PLASS/index.md (+1 −1)

@@ -9,7 +9,7 @@ categories: []
 date: 2021-02-23T11:16:31+01:00
 
 # Optional external URL for project (replaces project detail page).
-external_link: "https://plass.io"
+external_link: "https://www.plass-projekt.de"
 
 # Featured image
 # To use, add an image named `featured.jpg/png` to your page's folder.
