Forthcoming theses from other universities

Solvable Topological Boundaries
The hallmark of topological phases of matter is the presence of robust boundary states. In this dissertation, a formalism is developed with which analytical solutions for these states can be straightforwardly obtained by making use of destructive interference, which is naturally present in a large family of lattice models. The validity of the solutions is independent of the tight-binding parameters, and as such these lattices can be seen as a subset of solvable systems in the landscape of tight-binding models. The approach allows for full control of the topological phase of the system as well as the dispersion and localization of the boundary states, which makes it possible to design lattice models possessing a desired topological phase from the bottom up. Further applications of this formalism can be found in the fields of higher-order topological phases, where boundary states localize to boundaries of codimension larger than one, and of non-Hermitian Hamiltonians, a fruitful approach to describing dissipation that features many exotic phenomena, such as the possible breakdown of bulk-boundary correspondence. In both fields, access to exact solutions has led to new insights.
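The abstract does not name the specific lattice models, so as a minimal illustration of the idea, the sketch below (parameters hypothetical) diagonalizes a finite Su-Schrieffer-Heeger (SSH) chain, the textbook case in which destructive interference empties one sublattice and leaves a boundary state with an exact exponential profile:

```python
import numpy as np

def ssh_hamiltonian(n_cells, v, w):
    """Open SSH chain: intracell hopping v, intercell hopping w."""
    n = 2 * n_cells
    h = np.zeros((n, n))
    for i in range(n_cells):
        a, b = 2 * i, 2 * i + 1
        h[a, b] = h[b, a] = v              # intracell bond
        if i < n_cells - 1:
            h[b, b + 1] = h[b + 1, b] = w  # intercell bond
    return h

v, w = 0.4, 1.0                            # topological phase: |v| < |w|
vals, vecs = np.linalg.eigh(ssh_hamiltonian(20, v, w))
k = np.argmin(np.abs(vals))                # state closest to zero energy
edge = vecs[:, k]

# Destructive interference leaves one sublattice essentially empty; on the
# other, the boundary solution decays geometrically as (v/w)**i from the edge.
amp_a = np.abs(edge[0::2])
ratio = amp_a[1] / amp_a[0]
print(vals[k], ratio)                      # energy ~ 0, decay ratio ~ v/w = 0.4
```

Note that the decay ratio equals v/w independently of fine-tuning, which is the abstract's point that the solution's validity does not hinge on particular tight-binding parameter values.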

3D Electron Microscopy Methods and Applications : Structures from Atomic Scale to Mesoscale
The crystal structure determines the physical properties of a material. The structure can be analysed at different levels, from the atomic level, through the mesoscale, all the way up to the macroscale. The transmission electron microscope (TEM) is a powerful tool for studying the structure of materials at the atomic and mesoscale levels because of the short wavelength of the electrons. At the atomic level, structure determination using TEM can be performed in diffraction mode. Recent developments in 3D electron diffraction methods make structure determination from nano- and micron-sized crystals much easier than before. However, due to the strong interactions, electrons can be scattered multiple times while passing through the crystal, causing the measured intensities to be less accurate than in the X-ray case.
In this thesis, we use the continuous rotation electron diffraction (cRED) method developed in our group to investigate the structure of materials and the accuracy of the method. In the third chapter, we use the cRED method to determine the structure of two aluminophosphate zeolites, PST-13 and PST-14. We show that these structures can be built from two pairs of enantiomeric structural building units. In the fourth chapter, we show that despite the inaccuracy in the measured intensities originating from dynamical effects, it is still possible to determine the structure accurately. We show that the atomic coordinates of the ZSM-5 and sucrose crystal structures determined from multiple electron diffraction datasets are identical to those determined from X-ray or neutron data. We also assessed the linearity between calculated and observed structure factors and used this as a coarse indicator of diffraction data quality for protein crystals.
Apart from the atomic structure, mesoscale structures, such as mesopores, can also determine the properties of materials. The 3D organization of these nanoscale features can be investigated using TEM electron tomography techniques. In chapter five, we performed electron tomography on two different materials with mesoporous structure and illustrated the formation mechanism of mesoporous magnesium carbonate and the internal tunnel structure of hierarchical TS-1 zeolite.

Towards Reliable Gene Regulatory Network Inference
Phenotypic traits are now known to stem from the interplay between genetic variables across many, if not all, levels of biology. The field of gene regulatory network (GRN) inference is concerned with understanding the regulatory interactions between genes in a cell, in order to build a model that captures the behaviour of the system. Perturbation biology, whereby genes or RNAs are targeted and their activity altered, is of great value for the GRN field. By first systematically perturbing the system and then reading out the system's reaction as a whole, we can feed these data into various methods to reverse engineer the key agents of change.
The initial study sets the groundwork for the rest, and deals with finding common ground among the sundry methods in order to compare and rank their performance in an unbiased setting. The GeneSPIDER (GS) MATLAB package is an inference benchmarking platform in which methods can be added via a wrapper for testing in competition with one another. Synthetic datasets and networks spanning a wide range of conditions can be created for this purpose. The evaluation of methods across various conditions in this benchmark demonstrates which properties influence the accuracy of which methods, and thus which methods are more suitable for use under a given, characterized condition.
The second study introduces NestBoot, a novel framework for increasing inference accuracy within the GS environment by independent, nested bootstraps, i.e., repeated inference trials. Under low to medium noise levels, this allows support to be gathered for links occurring most often, while spurious links are discarded through comparison to an estimated null distribution of shuffled links. While noise continues to plague every method, nested bootstrapping in this way is shown to increase the accuracy of several different methods.
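NestBoot itself is a MATLAB framework and is not reproduced here; the following sketch (in Python, with a toy correlation-threshold "method" standing in for a real inference algorithm) only illustrates the underlying idea of keeping links whose bootstrap support exceeds a null distribution estimated from shuffled data:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_support(method, data, n_boot=50):
    """Fraction of bootstrap resamples in which each link is inferred."""
    n_genes = data.shape[1]
    counts = np.zeros((n_genes, n_genes))
    for _ in range(n_boot):
        resample = data[rng.integers(0, len(data), len(data))]
        counts += method(resample)
    return counts / n_boot

def toy_method(sample):
    """Stand-in 'inference method': threshold the absolute correlation matrix."""
    c = np.corrcoef(sample.T)
    np.fill_diagonal(c, 0.0)
    return (np.abs(c) > 0.5).astype(float)

# Toy expression data: gene 1 is driven by gene 0; gene 2 is independent noise.
x0 = rng.normal(size=200)
data = np.column_stack([x0, x0 + 0.3 * rng.normal(size=200),
                        rng.normal(size=200)])

support = bootstrap_support(toy_method, data)
# Null support: the same procedure on data with each column shuffled independently.
null = bootstrap_support(toy_method, rng.permuted(data, axis=0))
keep = support > null.max() + 0.05   # retain links clearly above the null
print(keep[0, 1], keep[0, 2])        # true link survives, spurious one does not
```

The shuffling destroys real gene-gene dependencies while preserving each gene's marginal distribution, so any "support" in the shuffled run estimates how often the method hallucinates links from noise alone.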
The third study applies NestBoot to real data to infer a reliable GRN from a small interfering RNA (siRNA) perturbation dataset covering 40 genes known or suspected to play a role in human cancers. Methods were developed to benchmark the accuracy of an inferred GRN in the absence of a known true GRN, by assessing how well it fits the data compared to a null model of shuffled topologies. A high-confidence network was recovered containing many regulatory links known in the literature, as well as a number of novel links.
The fourth study seeks to infer reliable networks at large scale, utilizing the high-dimensional biological datasets of the LINCS L1000 project. This dataset is too noisy for accurate GRN inference as a whole; hence we developed a method to select a subset that is sufficiently informative to accurately infer GRNs. This is a first step towards identifying probable submodules within a greater genome-scale GRN yet to be uncovered.

Role of ecological processes in determining effects of contaminants in aquatic ecosystems
Aquatic ecosystems cover approximately 70% of the Earth’s surface and support a wide range of ecosystem services. Despite their importance, aquatic ecosystems are increasingly exposed to anthropogenic stressors, such as contaminants and climate change impacts. Ecosystems comprise a complex web of interactions, both between organisms and between organisms and the abiotic environment. While there is extensive evidence for the importance of ecological processes in determining net ecosystem effects of contaminants, their effects are most often studied in isolation and in a single-species setting.
The aim of this thesis is to investigate the ecological effects of contaminants in aquatic ecosystems, ranging from cellular to ecosystem endpoints, by using model ecosystems of increasing complexity. This thesis studies the effects of ionising radiation on the biochemical composition of microalgae and how these may affect consumers (Paper I), as well as its effects on an artificial freshwater ecosystem (microcosms) in terms of ecological processes (Paper II) and carbon flows (Paper III). Finally, the thesis investigates the combined effects of a flame retardant and increased temperature on a model ecosystem comprising a semi-natural Baltic Sea community (Paper IV).
Ionising radiation caused biochemical changes in primary producers that affected the next trophic level, where the consumer responded with an increased feeding rate, suggesting a change in the food quality of the primary producer (Paper I). The microcosms exposed to ionising radiation showed significant dose-related effects on photosynthetic parameters for all macrophyte species. Dose-dependent trends were seen in snail grazing rates and reproduction, indicating a potential for long-term effects (Paper II). Similarly, the carbon flow networks (Paper III) also indicated that the main effect of radiation was a decline in primary production of the macrophytes, while pelagic bacterial production increased. However, the relative distribution of flows from dissolved carbon changed only slightly with increasing dose rates, which mainly triggered an increase in the amount of carbon dissipated through respiration. Finally, in Paper IV, higher temperatures induced the release of PO4 from the sediment, which stimulated the growth of cyanobacteria, in turn leading to an increase in copepod abundance.
These results demonstrate that the effects of contaminants on ecosystems depend on ecological processes, which may influence species-specific responses and lead to indirect effects. This thesis builds on a body of literature calling for a more holistic approach to ecotoxicology and radioecology, where ecosystem-level responses to contaminants are taken into consideration.

Modeling framework for ageing of low alloy steel
Ageing of low alloy steel in nuclear applications commonly takes the form of hardening and embrittlement of the material. This is due to the evolution of the microstructure during irradiation and under purely thermal conditions, in combination or separately. Irradiation introduces evenly distributed solute clusters, while thermal ageing has been shown to yield a more inhomogeneous distribution. These clusters affect the dislocation motion within the material and result in hardening and, in more severe cases of ageing, also a decreased work hardening slope due to plastic strain localization into bands/channels. Embrittlement corresponds to decreased fracture toughness due to microstructural changes resulting from ageing. The thesis presents a possible framework for modeling of ageing effects in low alloy steels.
In Paper I, a strain gradient plasticity framework is applied in order to capture length scale effects. The constitutive length scale is assumed to be related to the dislocation mean free path and the changes this undergoes during plastic deformation. Several evolution laws for the length scale were developed and implemented in a FEM code considering 2D plane strain. This was used to solve a test problem of pure bending in order to investigate the effects of the length scale evolution. As all length scale evolution laws considered in this study result in a decreasing length scale, this leads to a loss of non-locality, which causes an overall softening in cases where the strain gradient dominates the solution. The results are in tentative agreement with the strain localization phenomena that occur in highly irradiated materials.
In Paper II, a scalar stress measure for cleavage fracture is developed and generalized, here called the effective normal stress measure.
This is used in a non-local weakest-link model which is applied to two datasets from the literature in order to study the effects of the effective normal stress measure, as well as to new experiments considering four-point bending of specimens containing a semi-elliptical surface crack. The model is shown to reproduce the failure probability of all considered datasets, i.e., it is well capable of transferring toughness information between different geometries.

Designing tools for conviviality : A design-led exploration of Participatory Activity Mapping
This thesis is a report of research work that contributes to the understanding of so-called convivial tools. It does this by describing how small enterprises use Participatory Activity Mapping as an approach to changing, as well as caring about, the people and things that hold their work situations together. Working on this thesis, I observed that small enterprises and their employees function in complex and heterogeneous work environments without having the tools or routines to represent how the different aspects of their work situation are held together. In this thesis such tools are described as convivial tools, that is, tools that people can use to create things, express their own tastes and care for others. Over 15 different Participatory Activity Mapping events were conducted during the period of research. The following research questions were posed: What are the potentialities of using Participatory Activity Mapping as a convivial tool? How does Participatory Activity Mapping aid the processes of designing product propositions? And how does Participatory Activity Mapping assist small enterprises in creating conviviality? A methodological and theoretical triangulation was used, together with a practice-based and design-led generative design approach, to advance the inquiry into the potentialities of using Participatory Activity Mapping as a convivial tool. The investigation revealed that knowledge is not created from a single vision; on the contrary, it is partial and pluralistic. Participatory Activity Mapping supports a situated approach, where the mapmakers co-create their own versions of their own situation together with versions and positions from other people and things. In this sense Participatory Activity Mapping is about helping the mapmakers to co-create topological propositions and see relations within their own practice in order to craft new relational patterns.
In addition, the study presents different mapping situations as examples of, and guidance for, how the design field can be sensitive to mapping aspects that show strategies for othering, making absent actors present, and tellable otherness. The conclusion of this thesis is that future design researchers and practitioners should consider shifting their focus from creating product propositions to creating convivial tools that support people in their efforts to enrich their environment with the fruits of their own vision. This could help design practitioners to engage the space in-between and change current design tools, such as service blueprints, into something that is much more heterogeneous, decentralized, messy and involving.

Asynchronous First-Order Algorithms for Large-Scale Optimization : Analysis and Implementation
Developments in communication and data storage technologies have made large-scale data collection more accessible than ever. The transformation of this data into insight or decisions typically involves solving numerical optimization problems. As the data volumes increase, the optimization problems grow so large that they can no longer be solved on a single computer. This has created a strong interest in developing optimization algorithms that can be executed efficiently on multiple computing nodes in parallel. One way to achieve efficiency in parallel computations is to allow for asynchrony among the nodes, which corresponds to making the nodes spend less time coordinating with each other and more time computing, possibly based on delayed information. However, asynchrony runs the risk of making otherwise convergent algorithms divergent, and the convergence analysis of asynchronous algorithms is generally harder. In this thesis, we develop theory and tools to help understand and implement asynchronous optimization algorithms under time-varying, bounded information delay.
In the first part, we analyze the convergence of different asynchronous optimization algorithms. We first propose a new approach for minimizing the average of a large number of smooth component functions. The algorithm uses delayed partial gradient information, and it covers delayed incremental gradient and delayed coordinate descent algorithms as special cases. We show that when the total loss function is strongly convex and the component functions have Lipschitz-continuous gradients, the algorithm has a linear convergence rate. The step size of the algorithm can be selected without knowing the bound on the delay and still guarantees convergence to within a predefined level of suboptimality. Then, we analyze two different variants of incremental gradient descent algorithms for regularized optimization problems. In the first variant, asynchronous mini-batching, we consider solving regularized stochastic optimization problems with smooth loss functions. We show that the algorithm with time-varying step sizes achieves the best-known convergence rates of synchronous operation when (i) the feasible set is compact or (ii) the regularization function is strongly convex and the feasible set is closed and convex. This means that the delays have an asymptotically negligible effect on the convergence, and we can expect speedups when using asynchronous computations. In the second variant, proximal incremental aggregated gradient, we show that when the objective function is strongly convex, the algorithm with a constant step size that depends on the maximum delay bound and the problem parameters converges globally and linearly to the true optimum.
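As an illustration of the delayed-gradient setting only (not the thesis's algorithm or step-size rule; problem sizes, delay bound and step size below are hypothetical), the sketch runs incremental gradient steps on a strongly convex least-squares problem using iterates that are up to `tau_max` steps stale, and still converges:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy strongly convex problem: minimize the average of m smooth components
# f_i(x) = 0.5 * (a_i . x - b_i)^2, built so that all share the minimizer x_true.
m, d = 40, 5
A = rng.normal(size=(m, d))
x_true = rng.normal(size=d)
b = A @ x_true                      # consistent system: the optimum is x_true

tau_max = 5                         # bound on the time-varying information delay
step = 0.02
x = np.zeros(d)
history = [x.copy()]                # stale iterates a slow worker might read

for _ in range(5000):
    i = rng.integers(m)                                    # component picked this step
    tau = rng.integers(0, min(tau_max + 1, len(history)))  # delay <= tau_max
    x_stale = history[-1 - tau]
    g = (A[i] @ x_stale - b[i]) * A[i]                     # delayed partial gradient
    x = x - step * g
    history.append(x.copy())
    history = history[-(tau_max + 1):]                     # keep only recent iterates

err = np.linalg.norm(x - x_true)
print(err)                          # small: converges despite the stale gradients
```

The point of the sketch is structural: each update uses the gradient of one component evaluated at an iterate up to `tau_max` steps old, mimicking a worker that computed on information that was current when it started.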
In the second part, we first present POLO, an open-source C++ library that focuses on algorithm development. We use the policy-based design approach to decompose the proximal gradient algorithm family into its essential policies. This helps us handle combinatorially increasing design choices with linearly many tools, and generates highly efficient code with a small footprint. Together with its sister library in Julia, POLO.jl, our software framework helps optimization and machine-learning researchers to quickly prototype their ideas, benchmark them against the state-of-the-art, and ultimately deploy the algorithms on different computing platforms in just a few lines of code. Then, using the utilities of our software framework, we build a new, "serverless" executor for parallel Alternating Direction Method of Multipliers (ADMM) iterations. We use Amazon Web Services' Lambda functions as the computing nodes, and we observe speedups up to 256 workers and efficiencies above 70% up to 64 workers. These preliminary results suggest that serverless runtimes, with their availability and elasticity, are promising candidates for scaling up the performance of distributed optimization algorithms.

Tomographic studies of the 21-cm signal during reionization : Going beyond the power spectrum
The formation of the first luminous sources in the Universe, such as the first generation of stars and accreting black holes, led to the ionization of the hydrogen gas present in the intergalactic medium (IGM). This period, in which the Universe transitioned from a cold and neutral state to a predominantly hot and ionized state, is known as the Epoch of Reionization (EoR). The EoR is one of the least understood epochs in the Universe's evolution, mostly due to the lack of direct observations. We can probe the reionization process with the 21-cm signal, produced by the spin-flip transition in neutral hydrogen. However, current radio telescopes have not been able to detect this faint signal. The low-frequency component of the Square Kilometre Array (SKA-Low) will be sensitive enough not only to detect the 21-cm signal produced during the EoR but also to produce images of its distribution on the sky. A sequence of such 21-cm images from different redshifts will constitute a three-dimensional, tomographic data set. Before the SKA comes online, it is prudent to develop methods to analyse these tomographic images in a statistical sense. In this thesis, we study the prospect of understanding the EoR using such tomographic analysis methods. In Papers I, II and V, we use simulated 21-cm data sets to investigate methods to extract and interpret information from such images. We implement a new image segmentation technique, known as superpixels, to identify ionized regions in the images and find that it performs better than previously proposed methods. Once we have identified the ionized regions (also known as bubbles), we can determine the bubble size distribution (BSD) using various size-finding algorithms and use the BSDs as summary statistics of the 21-cm signal during reionization. We also investigate the impact of different line-of-sight effects, such as the light-cone effect and redshift-space distortions, on the measured BSDs.
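The superpixel segmentation and size-finding algorithms used in the thesis are more sophisticated, but the notion of a bubble size distribution can be illustrated with plain connected-component labelling on a toy binary ionization map (all values below hypothetical):

```python
from collections import deque

# Toy binary ionization map: 1 = ionized, 0 = neutral.
field = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
]

def bubble_sizes(grid):
    """Sizes of connected ionized regions (4-connectivity flood fill)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sorted(sizes)

print(bubble_sizes(field))   # → [1, 1, 4, 4]
```

The histogram of such region sizes, tracked as a function of redshift, is the kind of summary statistic the BSD provides; the same machinery applied to the zeros instead of the ones yields the neutral-island size distribution discussed below for the late stages of reionization.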
During the late stages of reionization, the BSDs become less informative since most of the IGM has become ionized. We therefore propose to study the neutral regions (also known as islands) during these late times. In Paper V, we find that most neutral islands will be relatively easy to detect with SKA-Low, as they remain quite large until the end of reionization, and their size distribution depends on the properties of the sources of reionization. Previous studies have shown that the 21-cm signal is highly non-Gaussian; therefore the power spectrum cannot characterize the signal completely. In Papers III and IV, we use the bispectrum, a higher-order statistic related to the three-point correlation function, to characterize the signal. In Paper III, we probe the non-Gaussianity in the 21-cm signal caused by temperature fluctuations due to the presence of X-ray sources. We find that the evolution of the normalized bispectrum is different from that of the power spectrum, which is useful for breaking the degeneracy between models that use different types of X-ray sources. We also show that the 21-cm bispectrum can be constructed from observations with SKA-Low. Paper IV presents a fast and simple method to study the so-called squeezed limit of the bispectrum, which describes how the small-scale fluctuations respond to the large-scale environment. We show that this quantity evolves during reionization and differs between different reionization scenarios.

Design of soundproof panels via metamaterial concept
The goal of this work is to find ways to improve the sound insulation properties of different types of panels in order to meet different requirements. Inspired by the non-trivial behavior of locally resonant acoustic metamaterials, this concept is introduced into the design of structures in order to explore potential ways to improve the sound insulation behavior in specific frequency regions of interest. In the relatively low-frequency region, where the bending wavelength is much longer than the distance between isolated resonators, which is also the frequency range of interest in most of this work, it may be assumed that the effects of the resonators are uniformly distributed over the entire surface. An impedance approach is hence proposed to estimate the sound transmission loss of the metamaterial panels in order to gain more physical insight. This is realized, in general, by combining the equivalent impedance of the resonators with the corresponding impedance of the host panel. Valuable theories are derived on this basis, laying a solid foundation for effective and efficient design of metamaterial panels. This approach also provides a fast and reliable design tool prior to time-consuming and computationally expensive numerical simulations. Based on that, a new design for locally resonant metamaterial sandwich plates is proposed to improve the sound transmission loss in the coincidence frequency region. A systematic method to tune the resonance frequency of the local resonators is developed. This approach also supplies a method to remove the possible side-dips associated with the resonance of the resonators. The influence of the sound radiation from the resonators is further investigated with Finite Element models. It is proposed to embed the resonators inside the core material in order to eliminate this possible influence, and also to obtain a smooth surface.
The metamaterial sandwich panel designed in this way combines improved acoustic insulation properties with the lightweight nature of the sandwich panel. Besides the coincidence frequency region, the region around the ring frequency of a cylindrical shell is another frequency region of poor sound transmission loss. The effectiveness of locally resonant metamaterials is also investigated there. Similar to the case of the flat panel, both an impedance model and a Finite Element model are developed for the sound transmission loss problem. The influence of the resonators is presented and compared with the case of the flat panel. Unlike the case of the metamaterial flat panel, two side-dips around the sharp improvement cannot be avoided when applying the resonators near the ring frequency of the curved panel. The reason for this is explored using the impedance approach. It is noticed that, while the impedance of a flat panel near the critical frequency shifts from a mass-type impedance to a stiffness-type impedance, the impedance of a cylindrical shell shifts from a stiffness-type (tension-type) impedance to a mass-type impedance. For a traditional mass-spring type resonator, however, the equivalent impedance always shifts from a mass-type impedance to a stiffness-type impedance when the frequency crosses the resonance frequency. Therefore, when traditional resonators are applied near the ring frequency, there are always frequencies at which the impedances cancel each other, resulting in worsened sound transmission loss. In order to achieve a better improvement of the sound transmission loss in this frequency region, new types of resonators have to be developed. A locally resonant metamaterial curved double wall is proposed and studied, with the aim of addressing the mass-spring-mass resonance and ring frequency effects of the wall.
The sound transmission loss properties of a curved double wall are first investigated by introducing the concept of ‘apparent impedance’, which expresses the properties of the entire structure in terms of the impedances of the constituting panels and the air cavity. The apparent impedance derivation is validated against Finite Element models. The curved double wall is then specifically designed by adjusting the two characteristic frequencies to be close to each other in order to narrow the region associated with poor transmission loss. This subsequently makes it possible to improve the transmission loss in this region by inserting effectively tuned local resonators. The design principles are discussed, and applications to double walls consisting of either the same or different curved panels are both included.
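The thesis's full impedance formulation is not reproduced in the abstract; as a minimal sketch of the idea (normal incidence, limp panel, undamped resonators, all parameter values hypothetical), the example below adds the equivalent resonator impedance to the panel's mass impedance and evaluates the change in transmission loss, reproducing the mass-to-stiffness impedance flip across resonance described above:

```python
import numpy as np

rho_c = 415.0                 # characteristic impedance of air [Pa s/m]
m_panel = 10.0                # panel surface mass [kg/m^2] (hypothetical)
m_res, f_res = 2.0, 2000.0    # resonator mass per unit area and tuning frequency
w_r = 2 * np.pi * f_res

def tl(f, with_resonators=True):
    """Normal-incidence transmission loss from the combined impedance."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    z = 1j * w * m_panel                              # limp-panel mass impedance
    if with_resonators:
        # Undamped mass-spring resonator: mass-type below f_res,
        # stiffness-type above, diverging at resonance.
        z = z + 1j * w * m_res * w_r**2 / (w_r**2 - w**2)
    return 20 * np.log10(np.abs(1 + z / (2 * rho_c)))

f = np.array([500.0, 1900.0, 2100.0, 4000.0])
delta = tl(f) - tl(f, with_resonators=False)
print(np.round(delta, 1))     # gain below resonance, side-dip just above it
```

Just above resonance the resonator's stiffness-type impedance partly cancels the panel's mass-type impedance, which is exactly the cancellation mechanism the abstract invokes to explain the unavoidable side-dips near the ring frequency of the curved panel.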

Investigating subphotospheric dissipation in gamma-ray bursts by fitting a physical model