<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://fordatis.fraunhofer.de/handle/fordatis/33">
    <title>Fordatis Collection:</title>
    <link>https://fordatis.fraunhofer.de/handle/fordatis/33</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://fordatis.fraunhofer.de/handle/fordatis/480" />
        <rdf:li rdf:resource="https://fordatis.fraunhofer.de/handle/fordatis/448" />
        <rdf:li rdf:resource="https://fordatis.fraunhofer.de/handle/fordatis/435" />
        <rdf:li rdf:resource="https://fordatis.fraunhofer.de/handle/fordatis/383" />
      </rdf:Seq>
    </items>
    <dc:date>2026-05-01T11:37:39Z</dc:date>
  </channel>
  <item rdf:about="https://fordatis.fraunhofer.de/handle/fordatis/480">
    <title>Triaxial Vibration Data of a Milling Process: Sharp and Blunt Tool Condition</title>
    <link>https://fordatis.fraunhofer.de/handle/fordatis/480</link>
    <description>Title: Triaxial Vibration Data of a Milling Process: Sharp and Blunt Tool Condition
Data authors: Liebermann, Joris
Abstract: This dataset contains triaxial vibration data acquired during a face‑milling process under two distinct tool conditions (sharp and worn). Experiments were conducted on a 3‑axis CNC milling machine (VECTOR 850 M SI) using a 10 mm solid carbide end mill with 6 flutes, machining a flat steel workpiece in conventional face milling. The milling feed direction was aligned with the machine x‑axis. Cutting parameters were kept constant across both tool conditions: 60 % radial depth of cut, 2 mm axial depth of cut, spindle speed of 3500 rpm, and feed rate of 600 mm/min.

Vibrations were measured with a Bosch BMI270 MEMS accelerometer mounted on the spindle housing. The sensor axes were aligned with the machine coordinate system (x: feed direction, y: transverse in‑plane, z: spindle axis). Triaxial acceleration signals were recorded as raw time series at a sampling frequency of 1600 Hz and are provided in units of g. The dataset consists of two NumPy .npy files, one per tool condition: SharpTool_1600Hz_xyz.npy (sharp tool) and WornTool_1600Hz_xyz.npy (worn tool). Each file contains a 2D NumPy array of shape (n_samples, 3), corresponding to time samples and the three acceleration components (x, y, z). No additional label files are required, as the tool condition is encoded in the file name. The dataset is intended for developing and benchmarking signal processing and machine learning methods for tool wear detection and related diagnostics in milling operations.</description>
    <dc:date>2026-01-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://fordatis.fraunhofer.de/handle/fordatis/448">
    <title>Benchmarking Learnable Mesh and Texture Representations for Immersive Digital Twins</title>
    <link>https://fordatis.fraunhofer.de/handle/fordatis/448</link>
    <description>Title: Benchmarking Learnable Mesh and Texture Representations for Immersive Digital Twins
Data authors: Müller, Linus; Bätz, Michel; Berg, André; Gray, Timothy; Gul, Muhammad Shahzeb Khan; Schinabeck, Christian; Keinert, Joachim
Abstract: Neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) use volumetric scene representations to achieve impressive visual results in the field of novel-view synthesis. However, traditional 3D pipelines are dominated by textured meshes, supported by hardware-assisted rendering and a huge software ecosystem. We show that mesh-based workflows can also profit from these novel reconstruction methods by evaluating mesh reconstruction algorithms paired with view-dependent textures in terms of texture sharpness, surface accuracy, and real-time rendering performance. For that purpose, we employ a modular 3D reconstruction pipeline and use it to benchmark not only publicly available datasets but also four new high-quality datasets of our own. These datasets capture different objects with both reflective and uniform surface characteristics.</description>
    <dc:date>2025-06-01T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://fordatis.fraunhofer.de/handle/fordatis/435">
    <title>Evidence Contextualization and Counterfactual Attribution for Conversational QA over Heterogeneous Data with RAG Systems</title>
    <link>https://fordatis.fraunhofer.de/handle/fordatis/435</link>
    <description>Title: Evidence Contextualization and Counterfactual Attribution for Conversational QA over Heterogeneous Data with RAG Systems
Data authors: Saha Roy, Rishiraj; Schlotthauer, Joel; Hinze, Chris; Foltyn, Andreas; Hahn, Luzian; Küch, Fabian
Abstract: Retrieval Augmented Generation (RAG) works as a backbone for interacting with an enterprise's own data via Conversational Question Answering (ConvQA). In a RAG system, a retriever fetches passages from a collection in response to a question, which are then included in the prompt of a large language model (LLM) for generating a natural language (NL) answer. However, several RAG systems today suffer from two shortcomings: (i) retrieved passages usually contain only their raw text and lack appropriate document context, negatively impacting both retrieval and answering quality; and (ii) attribution strategies that explain answer generation typically rely only on similarity between the answer and the retrieved passages, thereby generating merely plausible but not causal explanations. In this work, we demonstrate RAGONITE, a RAG system that remedies the above concerns by: (i) contextualizing evidence with source metadata and surrounding text; and (ii) computing counterfactual attribution, a causal explanation approach in which the contribution of a piece of evidence to an answer is determined by the similarity of the original response to the answer obtained after removing that evidence. To evaluate our proposals, we release a new benchmark, ConfQuestions: it has 300 hand-created conversational questions, each in English and German, coupled with ground-truth URLs, completed questions, and answers from 215 public Confluence pages. These documents are typical of enterprise wiki spaces with heterogeneous elements. Experiments with RAGONITE on ConfQuestions show the viability of our ideas: contextualization improves RAG performance, and counterfactual explanations outperform standard attribution.</description>
    <dc:date>2025-03-10T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://fordatis.fraunhofer.de/handle/fordatis/383">
    <title>CherrySet - A comprehensive dataset of three cherry trees throughout the 2023 season</title>
    <link>https://fordatis.fraunhofer.de/handle/fordatis/383</link>
    <description>Title: CherrySet - A comprehensive dataset of three cherry trees throughout the 2023 season
Data authors: Gilson, Andreas; Meyer, Lukas; Uhrmann, Franz; Killer, Annika; Scholz, Oliver; Keil, Fabian; Stamminger, Marc; Kittemann, Dominikus; Noack, Patrick
Abstract: CherrySet comprises a collection of 2D images covering three sweet cherry trees at more than 10 distinct time points, spanning from dormancy in March through blooming and growth until harvest in July 2023. In addition to the image data, CherrySet offers manually recorded ground-truth information obtained from reference branches throughout the growing season, including comprehensive bud, blossom, and fruit counts during all vegetation phases as well as the total number of cherries gathered at harvest.</description>
    <dc:date>2023-10-19T00:00:00Z</dc:date>
  </item>
</rdf:RDF>