Tools and Methods for Analysis, Debugging, and Performance Improvement of Equation-Based Models

Author: Martin Sjölund

Publisher: Linköping University Electronic Press

Published: 2015-05-11

Total Pages: 243

ISBN-10: 9175190710

Equation-based object-oriented (EOO) modeling languages such as Modelica provide a convenient, declarative method for describing models of cyber-physical systems. Because of the ease of use of EOO languages, large and complex models can be built with limited effort. However, current state-of-the-art tools do not provide the user with enough information when errors appear or simulation results are wrong. It is of paramount importance that such tools give the user enough information to correct errors or locate the problems that lead to wrong simulation results. However, understanding the model translation process of an EOO compiler is a daunting task that requires knowledge not only of the numerical algorithms that the tool executes during simulation, but also of the complex symbolic transformations being performed. As part of this work, methods have been developed and explored in which the EOO tool, an enhanced Modelica compiler, records the transformations during the translation process in order to provide better diagnostics, explanations, and analysis. This information is used to generate better error messages during translation. It is also used to provide better debugging for a simulation that produces unexpected results or where numerical methods fail. Meeting deadlines is particularly important for real-time applications. It is usually essential to identify possible bottlenecks and either simplify the model or give hints to the compiler that enable it to generate faster code. When profiling and measuring execution times of parts of the model, the recorded information can also be used to find out why a particular model executes slowly. Combined with debugging information, it is possible to find out why a given system of equations is slow to solve, which helps in understanding what can be done to simplify the model. A tool with a graphical user interface has been developed to make debugging and performance profiling easier. Debugging and profiling have been combined into a single view, so that performance metrics are mapped to equations, which in turn are mapped to debugging information. The algorithmic part of Modelica was extended with meta-modeling constructs (MetaModelica) for language modeling. In this context, a quite general approach to debugging and compilation from (extended) Modelica to C code was developed, which makes it possible to use the same executable format for simulation executables as for compiler bootstrapping when the compiler written in MetaModelica compiles itself. Finally, a method and tool prototype for speeding up simulations has been developed; it works by partitioning the model at appropriate places and compiling a simulation executable for a suitable parallel platform.
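To make the recorded-transformations idea concrete, here is a minimal Python/SymPy sketch, illustrative only and not the OpenModelica implementation described in the book: each symbolic rewrite the toy "compiler" applies is logged, so a suspect value in the generated code can be traced back through alias elimination and causalization to the source equations. The equations and rewrite steps are hypothetical.

    import sympy as sp

    x, y = sp.symbols("x y")

    # Flat equation system after instantiation (hypothetical model):
    eqs = {0: sp.Eq(x - y, 0),   # alias equation: y = x
           1: sp.Eq(x + y, 2)}

    trace = []  # records (equation id, description of the transformation)

    # Alias elimination: equation 0 says y == x, so replace y everywhere.
    alias = sp.solve(eqs[0], y)[0]
    trace.append((0, f"alias elimination: y := {alias}"))
    eq1 = eqs[1].subs(y, alias)
    trace.append((1, f"substituted alias into equation 1: {eq1}"))

    # Causalization: solve the remaining equation for x.
    x_expr = sp.solve(eq1, x)[0]
    trace.append((1, f"solved equation 1 for x: x := {x_expr}"))

    # A debugger can now map the generated assignment back through every
    # recorded rewrite to the source equations it came from:
    for eq_id, step in trace:
        print(f"eq {eq_id}: {step}")

A real compiler records far more step kinds (index reduction, tearing, scalarization), but the principle is the same: every generated statement carries a chain of provenance records pointing back to model source.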


Methods and Tools for Efficient Model-Based Development of Cyber-Physical Systems with Emphasis on Model and Tool Integration

Author: Alachew Mengist

Publisher: Linköping University Electronic Press

Published: 2019-08-21

Total Pages: 95

ISBN-10: 9176850366

Model-based tools and methods are playing important roles in the design and analysis of cyber-physical systems (CPSs) before building and testing physical prototypes. The development of increasingly complex CPSs requires the use of multiple tools for different phases of the development lifecycle, which in turn depends on the ability of the supporting tools to interoperate. However, currently no vendor provides comprehensive end-to-end systems engineering tool support across the entire product lifecycle, and no mature solution currently exists for integrating different system modeling and simulation languages, tools, and algorithms in the CPS design process. Thus, modeling and simulation tools are still used separately in industry. The unique integration challenges of CPSs result from the increasing heterogeneity of components and their interactions, the increasing size of systems, and essential design requirements from various stakeholders. The corresponding system development involves several specialists in different domains, often using different modeling languages and tools. In order to address the challenges of CPSs and facilitate design of system architecture and integration of different models, significant progress needs to be made towards model-based integration of multiple design tools, languages, and algorithms into a single integrated modeling and simulation environment. In this thesis we address this need by developing techniques for numerically stable co-simulation, advanced simulation model analysis, simulation-based optimization, and traceability, and by making them more accessible to the model-based cyber-physical product development process, leading to more efficient simulation. In particular, the contributions of this thesis are as follows: 1) development of a model-based dynamic optimization approach that integrates optimization into the model development process; 2) development of a graphical co-modeling editor and co-simulation framework for modeling, connecting, and unified system simulation of several different modeling tools using the transmission line modeling (TLM) technique, as sketched below; 3) development of a tool-supported method for multidisciplinary collaborative modeling and traceability support throughout the development process for CPSs; 4) development of an advanced simulation modeling analysis tool for more efficient simulation.
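Contribution 2 builds on TLM, which decouples subsystems by exchanging wave variables through a physically motivated time delay, so each tool can integrate independently between communication points. The following single-process Python sketch illustrates the idea for a point mass hitting a transmission line terminated by a rigid wall; the wave-variable updates follow one common TLM formulation, but all names and parameter values are illustrative assumptions, not taken from the book.

    import numpy as np

    m, Zc, T, dt = 1.0, 10.0, 0.1, 0.001  # mass, line impedance, line delay, step
    d = int(T / dt)                       # delay expressed in time steps
    n = 4000

    v = 1.0               # mass velocity, positive = compressing the line
    c1 = np.zeros(n)      # wave variable arriving at the mass side
    c2 = np.zeros(n)      # wave variable arriving at the wall side
    vh = np.zeros(n)      # velocity history (needed for the delayed waves)

    for k in range(n):
        vh[k] = v
        # The incoming wave at the mass is the wave the wall sent T seconds
        # ago; the rigid wall has zero velocity, so its outgoing wave is
        # formed from the mass side's delayed wave and velocity.
        c1[k] = c2[k - d] if k >= d else 0.0
        c2[k] = (c1[k - d] + 2 * Zc * vh[k - d]) if k >= d else 0.0
        F = c1[k] + Zc * v          # contact force on the mass from the line
        v -= (F / m) * dt           # explicit Euler on the mass
        if k % 800 == 0:
            print(f"t = {k*dt:5.2f} s   v = {v:+.3f} m/s")

Because the two sides only ever read information that is at least T seconds old, in a distributed setting each subsystem can be integrated by a different tool or node for a full delay interval before communicating, which is what makes TLM-based co-simulation numerically stable by construction.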


Content Ontology Design Patterns: Qualities, Methods, and Tools

Author: Karl Hammar

Publisher: Linköping University Electronic Press

Published: 2017-09-06

Total Pages: 238

ISBN-10: 917685454X

Ontologies are formal knowledge models that describe concepts and relationships and enable data integration, information search, and reasoning. Ontology Design Patterns (ODPs) are reusable solutions intended to simplify ontology development and support the use of semantic technologies by ontology engineers. ODPs document and package good modelling practices for reuse, ideally enabling inexperienced ontologists to construct high-quality ontologies. Although ODPs are already used for development, there are still remaining challenges that have not been addressed in the literature. These research gaps include a lack of knowledge about (1) which ODP features are important for ontology engineering, (2) less experienced developers' preferences and barriers for employing ODP tooling, and (3) the suitability of the eXtreme Design (XD) ODP usage methodology in non-academic contexts. This dissertation aims to close these gaps by combining quantitative and qualitative methods, primarily based on five ontology engineering projects involving inexperienced ontologists. A series of ontology engineering workshops and surveys provided data about developer preferences regarding ODP features, ODP usage methodology, and ODP tooling needs. Other data sources are ontologies and ODPs published on the web, which have been studied in detail. To evaluate tooling improvements, experimental approaches provide data from comparisons of new tools and techniques against established alternatives. The analysis of the gathered data resulted in a set of measurable quality indicators that cover aspects of ODP documentation, formal representation or axiomatisation, and usage by ontologists. These indicators highlight quality trade-offs: for instance, between ODP Learnability and Reusability, or between Functional Suitability and Performance Efficiency. Furthermore, the results demonstrate a need for ODP tools that support three novel property specialisation strategies, and they highlight the preference of inexperienced developers for template-based ODP instantiation, neither of which is supported in prior tooling. The studies also resulted in improvements to ODP search engines based on ODP-specific attributes. Finally, the analysis shows that XD should include guidance for developer roles and responsibilities in ontology engineering projects, suggestions on how to reuse existing ontology resources, and approaches for adapting XD to project-specific contexts.
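Template-based ODP instantiation, the approach the studies found inexperienced developers prefer, can be sketched in a few lines: the tool substitutes developer-supplied names into a pattern's axiom template instead of asking the developer to specialise each axiom by hand. The pattern, placeholders, and names below are hypothetical and greatly simplified; real ODPs (e.g., those published on ontologydesignpatterns.org) carry more axioms and annotations.

    from string import Template

    # A tiny PartOf-style Content ODP written as a Turtle fragment template
    # (prefix declarations omitted for brevity).
    odp = Template("""\
    :$Part rdf:type owl:Class .
    :$Whole rdf:type owl:Class .
    :$isPartOf rdf:type owl:ObjectProperty ;
        rdfs:domain :$Part ;
        rdfs:range :$Whole .
    """)

    # Template-based instantiation: the developer only supplies names, and
    # the tool emits the specialised axioms.
    print(odp.substitute(Part="Engine", Whole="Vehicle", isPartOf="isComponentOf"))

The design point is that the developer never edits axioms directly, which is exactly where the studied projects showed inexperienced ontologists make mistakes.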


Distributed Moving Base Driving Simulators

Author: Anders Andersson

Publisher: Linköping University Electronic Press

Published: 2019-04-30

Total Pages: 42

ISBN-10: 9176850900

Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of this type of tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver operates a driving simulator in the same way as they would drive in actual traffic, the simulator provides a realistic evaluation of whatever is under investigation. Two advantages of a driving simulator are (1) that the same situation can be repeated several times over a short period of time, and (2) that driver reactions can be studied during dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and driver inputs. To increase the simulator realism or the computational performance, it is possible to divide the vehicle model into subsystems that run on different computers connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model through a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation. This thesis investigates whether and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, where the simulation achieves the same degree of realism as a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles. Furthermore, the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model were transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, also for a distributed driving simulation. The results show that the distributed simulators we have developed work well overall, with satisfactory performance. It is important to manage the vehicle model and how it is connected to a distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if the delays are gradually increased, a driver in the distributed simulator will change their behavior. The impact of communication latency on a distributed simulator also depends on the simulator application, where different usages of the simulator, i.e., different simulator studies, will have different demands. We believe that many simulator studies could be performed using a distributed setup. One issue is how modifications to the system affect the vehicle model and the desired behavior. This leads to the need for a methodology for managing model requirements. In order to detect model deviations in the simulator environment, a monitoring aid has been implemented to notify test managers when a model behaves strangely or is driven outside its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (an equation-based, object-oriented programming language) for simulating subsystems is also examined. The Modelica implementation has also been extended with requirements management, and a framework is proposed for automatically evaluating the model in a tool.
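The observation that drivers change behavior as network delays grow can be reproduced in miniature: in the sketch below, a proportional-derivative "driver" steers a point mass back to the lane centre but sees the lateral position only after a configurable delay, mimicking a distributed simulator's network latency. All gains and parameter values are illustrative assumptions, not figures from the book.

    import numpy as np

    def peak_deviation(delay_steps, n=4000, dt=0.005):
        kp, kd = 8.0, 2.0          # illustrative "driver" gains
        y, v = 1.0, 0.0            # start 1 m off the lane centre, at rest
        y_hist = np.full(n, y)     # positions as transmitted over the network
        peak = abs(y)
        for k in range(n):
            y_hist[k] = y
            seen = y_hist[max(k - delay_steps, 0)]  # position after the delay
            u = -kp * seen - kd * v                 # driver's steering command
            v += u * dt                             # point-mass lateral dynamics
            y += v * dt
            peak = max(peak, abs(y))
        return peak

    for d in (0, 20, 60, 120):     # 0, 100, 300, 600 ms at 5 ms frames
        print(f"delay {d*5:3d} ms -> peak deviation {peak_deviation(d):.2f} m")

As the delay grows, the loop becomes more oscillatory and eventually unstable, which is the control-theoretic counterpart of a human driver compensating with changed steering behavior.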


Scalable and Efficient Probabilistic Topic Model Inference for Textual Data

Author: Måns Magnusson

Publisher: Linköping University Electronic Press

Published: 2018-04-27

Total Pages: 53

ISBN-10: 9176852881

Probabilistic topic models have proven to be an extremely versatile class of mixed-membership models for discovering the thematic structure of text collections. There are many possible applications, covering a broad range of areas of study: technology, natural science, social science, and the humanities. In this thesis, a new efficient parallel Markov chain Monte Carlo inference algorithm is proposed for Bayesian inference in large topic models. The proposed methods scale well with the corpus size and can be used for other probabilistic topic models and other natural language processing applications. The proposed methods are fast, efficient, scalable, and converge to the true posterior distribution. In addition, a supervised topic model for high-dimensional text classification is proposed, with emphasis on interpretable document prediction using the horseshoe shrinkage prior in supervised topic models. Finally, we develop a model and inference algorithm that can model agenda and framing of political speeches over time with a priori defined topics. We apply the approach to analyze the evolution of immigration discourse in the Swedish parliament by combining theory from political science and communication science with a probabilistic topic model.
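For orientation, the standard sequential collapsed Gibbs sampler for latent Dirichlet allocation, the baseline that samplers like those proposed in the book parallelize and improve upon, can be sketched as follows; the toy corpus and hyperparameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy corpus: word ids per document, vocabulary of V words, K topics.
    docs = [[0, 1, 2, 1], [3, 4, 3, 5], [0, 2, 4, 5]]
    V, K, alpha, beta = 6, 2, 0.1, 0.01

    z = [[rng.integers(K) for _ in d] for d in docs]  # topic of each token
    ndk = np.zeros((len(docs), K))   # document-topic counts
    nkw = np.zeros((K, V))           # topic-word counts
    nk = np.zeros(K)                 # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1

    for sweep in range(200):                  # collapsed Gibbs sweeps
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                   # remove the token from the counts
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional p(z = k | everything else), up to a constant
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k                   # add it back under the new topic
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    print("topic-word counts:\n", nkw)

The serial dependence between tokens in this sampler is precisely what makes parallelization nontrivial, and what the thesis's partially collapsed, parallel approach is designed to overcome while still converging to the true posterior.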


Companion Robots for Older Adults

Author: Sofia Thunberg

Publisher: Linköping University Electronic Press

Published: 2024-05-06

Total Pages: 175

ISBN-10: 9180755747

This thesis explores, through a mixed-methods approach, what happens when companion robots are deployed in care homes for older adults, by looking at the perspectives of key stakeholders. Nine studies are presented, with municipal decision makers, care staff, and older adults as participants; the studies have primarily been carried out in the field, in care homes and activity centres, where both qualitative data (e.g., observations and workshops) and quantitative data (surveys) have been collected. The thesis shows that companion robots seem to be here to stay and that they can contribute to a higher quality of life for some older adults. It further presents some challenges, with a certain discrepancy between what decision makers want and what staff are able to facilitate. For future research and use of companion robots, it is key to evaluate each robot model and potential use case separately, develop clear routines for how they should be used, and, most importantly, let all stakeholders be part of the process. The knowledge contribution is a holistic view of how different actors affect each other when emerging robot technology is introduced in a care environment.


Parameterized Verification of Synchronized Concurrent Programs

Author: Zeinab Ganjei

Publisher: Linköping University Electronic Press

Published: 2021-03-19

Total Pages: 192

ISBN-10: 9179296971

There is currently an increasing demand for concurrent programs. Checking the correctness of concurrent programs is a complex task due to the interleavings of processes. Sometimes, violation of the correctness properties in such systems causes human or resource losses; therefore, it is crucial to check the correctness of such systems. Two main approaches to software analysis are testing and formal verification. Testing can help discover many bugs at a low cost, but it cannot prove the correctness of a program. Formal verification, on the other hand, is the approach for proving program correctness. Model checking is a formal verification technique that is well suited to concurrent programs. It aims to automatically establish the correctness (expressed in terms of temporal properties) of a program through an exhaustive search of the behavior of the system. Model checking was initially introduced for the purpose of verifying finite-state concurrent programs, and extending it to infinite-state systems is an active research area. In this thesis, we focus on the formal verification of parameterized systems, that is, systems in which the number of executing processes is not bounded a priori. We provide fully automatic and parameterized model checking techniques for establishing the correctness of safety properties for certain classes of concurrent programs. We provide an open-source prototype for every technique and present our experimental results on several benchmarks. First, we address the problem of automatically checking safety properties for bounded as well as parameterized phaser programs. Phaser programs are concurrent programs that make use of the complex synchronization construct of Habanero Java phasers. For the bounded case, we establish the decidability of checking the violation of program assertions and the undecidability of checking deadlock-freedom. For the parameterized case, we study different formulations of the verification problem and propose an exact procedure that is guaranteed to terminate for some reachability problems even in the presence of unbounded phases and arbitrarily many spawned processes. Second, we propose an approach for automatic verification of parameterized concurrent programs in which shared variables are manipulated by atomic transitions to count and synchronize the spawned processes. For this purpose, we introduce counting predicates, which relate counters that track the number of processes satisfying some given properties to the variables that are directly manipulated by the concurrent processes. We then combine existing work on counter, predicate, and constrained monotonic abstraction and build a nested counterexample-based refinement scheme to establish correctness. Third, we introduce Lazy Constrained Monotonic Abstraction for more efficient exploration of well-structured abstractions of infinite-state non-monotonic systems. We propose several heuristics and assess the efficiency of the proposed technique through extensive experiments using our open-source prototype. Lastly, we propose a sound but (in general) incomplete procedure for automatic verification of safety properties for a class of fault-tolerant distributed protocols described in the Heard-Of (HO) model. The HO model is a popular model for describing distributed protocols. We propose a verification procedure that is guaranteed to terminate even for an unbounded number of processes executing the distributed protocol.
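A flavor of the counting idea can be given with a toy sketch: in a counter abstraction, a global state of N identical processes is the vector of how many processes occupy each local state, and a safety check is reachability over these vectors. The sketch below checks mutual exclusion for a hypothetical guarded protocol at fixed values of N; the book's techniques instead reason symbolically, via counting predicates and monotonic abstraction, so that all values of N are covered at once.

    from collections import deque

    IDLE, WAIT, CRIT = 0, 1, 2        # local states of a toy locking protocol

    def move(c, src, dst):
        c = list(c); c[src] -= 1; c[dst] += 1
        return tuple(c)

    def successors(c):
        # A global state is a counter vector: how many of the N identical
        # processes sit in each local state.
        if c[IDLE] > 0:
            yield move(c, IDLE, WAIT)           # request the lock
        if c[WAIT] > 0 and c[CRIT] == 0:        # guarded entry (global test)
            yield move(c, WAIT, CRIT)
        if c[CRIT] > 0:
            yield move(c, CRIT, IDLE)           # release the lock

    def check(N):
        init = (N, 0, 0)
        seen, frontier = {init}, deque([init])
        while frontier:
            s = frontier.popleft()
            if s[CRIT] > 1:                     # safety: mutual exclusion
                return f"N={N}: violated in {s}"
            for t in successors(s):
                if t not in seen:
                    seen.add(t); frontier.append(t)
        return f"N={N}: mutual exclusion holds ({len(seen)} counter states)"

    for N in (2, 5, 50):
        print(check(N))

Note that the guarded entry transition tests a counter of other processes, which is exactly the kind of global condition that makes such systems non-monotonic and motivates the constrained monotonic abstraction developed in the book.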


Analysis, Design, and Optimization of Embedded Control Systems

Author: Amir Aminifar

Publisher: Linköping University Electronic Press

Published: 2016-02-18

Total Pages: 155

ISBN-10: 917685826X

Today, many embedded or cyber-physical systems, e.g., in the automotive domain, comprise several control applications sharing the same platform. It is well known that such resource sharing leads to complex temporal behaviors that degrade the quality of control and, more importantly, may even jeopardize stability in the worst case, if not properly taken into account. In this thesis, we consider embedded control or cyber-physical systems where several control applications share the same processing unit. The focus is on the control-scheduling co-design problem, where the controller and scheduling parameters are jointly optimized. The fundamental difference between control applications and traditional embedded applications motivates the need for novel methodologies for the design and optimization of embedded control systems. This thesis is one more step towards the correct design and optimization of embedded control systems. Both offline and online methodologies for embedded control systems are covered. The importance of considering both the expected control performance and stability is discussed, and a control-scheduling co-design methodology is proposed that optimizes control performance while guaranteeing stability. Orthogonal to this, bandwidth-efficient stabilizing control servers are proposed, which support compositionality, isolation, and resource-efficiency in design and co-design. Finally, we extend the scope of the proposed approach to non-periodic control schemes and address the challenges in sharing the platform with self-triggered controllers. In addition to the offline methodologies, a novel online scheduling policy to stabilize control applications is proposed.
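Why resource sharing degrades control quality can be illustrated with a minimal sketch: a proportional controller stabilizes a scalar unstable plant, but its commands reach the actuator only after a latency caused by sharing the processor, and a quadratic performance index worsens as that latency grows. The plant, gain, and latency values are illustrative assumptions, not a model from the book.

    def cost_under_latency(delay_steps, n=2000, dt=0.01):
        a, b, k = 1.0, 1.0, 3.0    # unstable scalar plant x' = a*x + b*u, gain k
        x, cost = 1.0, 0.0
        pending = [0.0] * (delay_steps + 1)  # commands queued behind the latency
        for _ in range(n):
            pending.append(-k * x)     # command computed from the current sample
            u = pending.pop(0)         # ...but applied only after the latency
            x += (a * x + b * u) * dt  # explicit Euler on the plant
            cost += (x * x + u * u) * dt   # quadratic performance index
        return cost

    for d in (0, 5, 20, 40):           # latency in samples of 10 ms
        print(f"latency {d*10:3d} ms -> cost {cost_under_latency(d):10.2f}")

Past a certain latency the loop loses stability altogether, which is why the thesis insists on co-designing controller gains and scheduling parameters with stability guarantees rather than treating scheduling as an afterthought.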


Studying Simulations with Distributed Cognition

Author: Jonas Rybing

Publisher: Linköping University Electronic Press

Published: 2018-03-20

Total Pages: 94

ISBN-10: 9176853489

Simulations are frequently used for training, performance assessment, and prediction of future outcomes. In this thesis, the term “human-centered simulation” is used to refer to any simulation in which humans and human cognition are integral to the simulation's function and purpose (e.g., simulation-based training). A general problem for human-centered simulations is to capture the cognitive processes and activities of the target situation (i.e., the real-world task) and recreate them accurately in the simulation. The prevalent view within the simulation research community is that cognition consists of internal, decontextualized computational processes of individuals. However, contemporary theories of cognition emphasize the importance of the external environment, the use of tools, and social and cultural factors in cognitive practice. Consequently, there is a need for research on how such contemporary perspectives can be used to describe human-centered simulations, re-interpret theoretical constructs of such simulations, and direct how simulations should be modeled, designed, and evaluated. This thesis adopts distributed cognition as a framework for studying human-centered simulations. Training and assessment of emergency medical management in a Swedish context using the Emergo Train System (ETS) simulator was adopted as a case study. ETS simulations were studied and analyzed using the distributed cognition for teamwork (DiCoT) methodology with the goal of understanding, evaluating, and testing the validity of the ETS simulator. Moreover, to explore distributed cognition as a basis for simulator design, a digital re-design of ETS (DIGEMERGO) was developed based on the DiCoT analysis. The aim of the DIGEMERGO system was to retain core distributed cognitive features of ETS, to increase validity and outcome reliability, and to provide a digital platform for emergency medical studies. DIGEMERGO was evaluated in three separate studies: first, a usefulness, usability, and face-validation study that involved subject-matter experts; second, a comparative validation study using an expert-novice group comparison; and finally, a transfer-of-training study based on self-efficacy and management performance. Overall, the results showed that DIGEMERGO was perceived as a useful, immersive, and promising simulator, with mixed evidence for validity, that demonstrated increased general self-efficacy and management performance following simulation exercises. This thesis demonstrates that distributed cognition, using DiCoT, is a useful framework for understanding, designing, and evaluating simulated environments. In addition, the thesis conceptualizes and re-interprets central constructs of human-centered simulation in terms of distributed cognition. In doing so, it shows how distributed cognitive processes relate to the validity, fidelity, functionality, and usefulness of human-centered simulations. This thesis thus provides a new understanding of human-centered simulations that is grounded in distributed cognition theory.


Beyond Recognition

Author: Le Minh-Ha

Publisher: Linköping University Electronic Press

Published: 2024-05-06

Total Pages: 103

ISBN-10: 918075676X

This thesis addresses the need to balance the capabilities of facial recognition systems against the protection of personal privacy in machine learning and biometric identification. As advances in deep learning accelerate their evolution, facial recognition systems enhance security capabilities but also risk invading personal privacy. Our research identifies and addresses critical vulnerabilities inherent in facial recognition systems and proposes innovative privacy-enhancing technologies that anonymize facial data while maintaining its utility for legitimate applications. Our investigation centers on the development of methodologies and frameworks that achieve k-anonymity in facial datasets; leverage identity disentanglement to facilitate anonymization; exploit the vulnerabilities of facial recognition systems to underscore their limitations; and implement practical defenses against unauthorized recognition systems. We introduce novel contributions such as AnonFACES, StyleID, IdDecoder, StyleAdv, and DiffPrivate, each designed to protect facial privacy through advanced adversarial machine learning techniques and generative models. These solutions not only demonstrate the feasibility of protecting facial privacy in an increasingly surveilled world, but also highlight the ongoing need for robust countermeasures against the ever-evolving capabilities of facial recognition technology. Continuous innovation in privacy-enhancing technologies is required to safeguard individuals from the pervasive reach of digital surveillance and to protect their fundamental right to privacy. By providing open-source, publicly available tools and frameworks, this thesis contributes to the collective effort to ensure that advancements in facial recognition serve the public good without compromising individual rights. Our multi-disciplinary approach bridges the gap between biometric systems, adversarial machine learning, and generative modeling to pave the way for future research in the domain and to support AI innovation where technological advancement and privacy are balanced.
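The adversarial mechanism underlying such anonymization tools can be sketched with a fast-gradient-sign-style step against a stand-in recognizer. Here the "recognizer" is a fixed logistic model over a random embedding, purely for illustration; the book's methods target deep face recognition systems and use generative models to keep the result natural-looking.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in "face recognizer": a fixed logistic model over a 32-d embedding.
    # The weights are random here; a real system would be a deep network.
    w, b = rng.normal(size=32), 0.0

    def score(x):                 # probability of an identity match
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    x = rng.normal(size=32)       # the "probe" embedding, illustrative only
    print(f"match probability before: {score(x):.3f}")

    # FGSM-style step: the gradient of the logit w.r.t. the input is just w,
    # so stepping against sign(w) suppresses the match while bounding the
    # per-dimension change by eps. This is the basic mechanism behind
    # adversarial anonymization of facial data.
    eps = 0.5
    x_adv = x - eps * np.sign(w)
    print(f"match probability after:  {score(x_adv):.3f}")
    print(f"max per-dimension change: {np.max(np.abs(x_adv - x)):.3f}")

The privacy-engineering challenge the thesis tackles is doing this against black-box deep recognizers, and with perturbations that remain imperceptible in image space rather than in a toy embedding.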

