Joakim Lindén, Industrial Doctoral Student


I did my undergraduate studies at Linköping University, earning an M.Sc. in Applied Physics and Electrical Engineering (Y) with a specialization in Theoretical Physics. During this time I spent one year on exchange in Melbourne at the Royal Melbourne Institute of Technology (RMIT).

In 2006 I joined Saab as an FPGA developer and have since worked on numerous real-time embedded hardware projects, always in a video context.

From 2014 to 2015 I worked in the Netherlands at Amsterdam Scientific Instruments on CERN technology spin-off products, accelerating access to a particle detector/X-ray camera.

In 2015 I returned to Saab as a Specialist/Technical Fellow in Video & Graphics Technology, with a focus on embedded image processing and deep learning. Over the years I have supervised a number of Master's thesis students from several different universities, and I have participated in several research projects between Saab and MDU, one of which I lead as project leader.

In 2022 I enrolled as an Industrial PhD student to allow a deeper dive into the technology, with the long-term goal of learning and implementing new methods in daily industry work.

Safety is of paramount importance in the aviation industry. Still, there is a strong need for continuous improvement, and it is becoming increasingly clear that data-driven development methods such as Machine Learning (ML) and Deep Learning (DL) will play a large role in present and future avionics platforms. Nevertheless, the systems still need to be sufficiently safe, which is why new development and verification methodologies are needed.

My research focuses on the data acquisition and data management processes in general. Since the function of a DL model is to a great extent defined by the data it has been trained on, great care must be taken to ensure the correctness and representativeness of the data with respect to the formulated problem.

Collecting real-world data is expensive and sometimes prohibited, e.g. for safety or legal reasons. By generating the bulk of the training data by synthetic means, it is possible to impose arbitrary and extensive scene randomization for increased data diversity. Applying domain adaptation to such a diverse dataset yields data that is both diverse and representative, and perfectly annotated thanks to its synthetic origin. The domain adaptation in my research is an unsupervised ML process, meaning that not all of the involved datasets need to be annotated.
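To give a flavor of what unsupervised domain adaptation can look like, the sketch below implements CORAL (correlation alignment), one simple, well-known technique: it re-colors source-domain (e.g. synthetic) feature statistics to match the target (real) domain using only unlabeled target data. This is an illustrative example, not the specific method used in my research; the function name and feature-array layout are assumptions for the sketch.

```python
import numpy as np

def coral(source, target, eps=1e-6):
    """CORAL-style adaptation sketch: align second-order statistics of
    source-domain features to the target domain. No target labels needed.

    source, target: (n_samples, n_features) arrays of extracted features
    (layout assumed for this example).
    """
    d = source.shape[1]
    # Covariances, regularized slightly for numerical stability.
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)

    # Matrix square roots via eigendecomposition (covariances are symmetric PSD).
    def mat_pow(m, p):
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(vals ** p) @ vecs.T

    centered = source - source.mean(axis=0)
    whitened = centered @ mat_pow(cs, -0.5)   # remove source correlations
    recolored = whitened @ mat_pow(ct, 0.5)   # impose target correlations
    return recolored + target.mean(axis=0)    # also align the mean
```

After adaptation, the covariance (and mean) of the returned features matches that of the target domain, while the per-sample annotations carried over from the synthetic source remain valid.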

Challenges with this method include ensuring that the domain adaptation process does not semantically alter the rendered scenes, which could otherwise introduce new sources of inaccuracy through label flipping.
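One simple diagnostic for such label flipping (a hypothetical illustration, not a method from the research itself) is to run the same classifier over the scenes before and after adaptation and measure how often its predicted class changes:

```python
import numpy as np

def label_flip_rate(labels_before, labels_after):
    """Fraction of samples whose predicted label changed after domain adaptation.

    labels_before, labels_after: 1-D integer arrays of class predictions for
    the same images, produced by the same classifier before and after the
    adaptation step (the classifier itself is assumed to exist elsewhere).
    """
    before = np.asarray(labels_before)
    after = np.asarray(labels_after)
    if before.shape != after.shape:
        raise ValueError("prediction arrays must cover the same samples")
    return float(np.mean(before != after))
```

A non-negligible flip rate would suggest the adaptation is altering scene semantics rather than only low-level appearance, and would warrant closer inspection of the affected samples.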