Journal articles
- Role of Fibre-Fibre and Fibre-Matrix Adhesion in Stress Transfer in Composites made from Resin-Impregnated Paper Sheets
Authors: K.M. Almgren (1), E.K. Gamstedt (2), P. Nygård (3), Filip Malmberg, Joakim Lindblad, M. Lindström (1)
(1) STFI-Packforsk AB
(2) Dept. of Polymer and Fibre Technology, Royal Institute of Technology
(3) PFI Paper and Fibre Research Institute
Journal: International Journal of Adhesion and Adhesives 29(5), pp. 551-557
Abstract: Paper-reinforced plastics are gaining increased interest as packaging materials, where mechanical properties are of great importance. Strength and stress transfer in paper sheets are controlled by fibre bonds. In paper-reinforced plastics, where the sheet is impregnated with a polymer resin, other stress-transfer mechanisms may be more important. The influence of fibre-fibre bonds on the strength of paper-reinforced plastics was therefore investigated. Paper sheets with different degrees of fibre-fibre bonding were manufactured and used as reinforcement in a polymeric matrix. Image analysis tools were used to verify that the difference in the degree of fibre-fibre bonding had been preserved in the composite materials. Strength and stiffness of the composites were experimentally determined and showed no correlation to the degree of fibre-fibre bonding, in contrast to the behaviour of unimpregnated paper sheets. The degree of fibre-fibre bonding is therefore believed to have little importance in this type of material, where stress is mainly transferred through the fibre-matrix interface.
- Vectorized Table Driven Algorithms for Double Precision Elementary Functions Using Taylor Expansions
Authors: T. Barrera, D. Spångberg (1), Anders Hast and Ewert Bengtsson
(1) Dept. of Materials Chemistry, UU
Journal: Journal of Applied Mathematics 2(3), pp. 171-188
Abstract: This paper presents fast implementations of the inverse square root and arcsine, both in double precision. In single precision it is often possible to use a small table and one ordinary Newton-Raphson iteration to compute elementary functions such as the square root. In double precision a substantially larger table is necessary to obtain the desired precision, or, if a smaller table is used, the additional Newton-Raphson iterations required to obtain the precision often involve the evaluation of other expensive elementary functions. Furthermore, large tables occupy cache memory that should have been available to the application code. Obtaining the desired precision using a small table can instead be realised by using a higher-order method than the second-order Newton-Raphson method. A generalization of Newton's method to higher order is Householder's method, which unfortunately often results in very complicated expressions requiring many multiplications, additions, and even divisions. We show how a high-order method can be used that requires only a few extra additions and multiplications for each degree of higher order. The method starts from the Taylor expansion of the difference between the value of the elementary function and a starting guess value for each iteration. If the Taylor series is truncated after the second term, ordinary Newton iterations are obtained. In several cases it is possible to algebraically simplify the difference between the true value and the starting guess value. In those cases we show that it is advantageous to use the Taylor series to higher order to obtain a fast convergent method. Moreover, we show how the coefficients of a Chebyshev polynomial can be fitted to give as little error as possible for the functions close to zero and at the same time reduce the number of terms in the Taylor expansion. In the paper we benchmark two example implementations of the method on the x86_64 architecture. The first is the inverse square root, where the actual table (to 12-bit precision) is provided by the processor hardware. The inverse square root is important in many application programs, including computer graphics and explicit particle simulation codes, for instance the Monte Carlo and Molecular Dynamics methods of statistical mechanics.
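The shape of the update can be made concrete in a few lines. The following Python sketch refines a crude bit-level seed (standing in here for the processor's 12-bit hardware approximation) with a single third-order step derived from the Taylor expansion described above; it is an illustration of the idea, not the authors' vectorized x86_64 implementation, and the seed constant is an assumption of this sketch.

```python
import struct

def seed_inv_sqrt(x):
    """Crude initial guess for 1/sqrt(x) via bit manipulation of the
    IEEE-754 representation (a stand-in for the ~12-bit table the
    paper obtains from processor hardware)."""
    i = struct.unpack('<Q', struct.pack('<d', x))[0]
    i = 0x5FE6EB50C7B537A9 - (i >> 1)
    return struct.unpack('<d', struct.pack('<Q', i))[0]

def inv_sqrt(x):
    """One high-order refinement step.  With e = 1 - x*y**2 we have
    1/sqrt(x) = y * (1 - e)**(-1/2); expanding the last factor as a
    Taylor (binomial) series gives 1 + e/2 + 3e^2/8 + 5e^3/16 + ...
    Truncating after e/2 is ordinary Newton-Raphson; each further
    term costs only a few extra multiplications and additions."""
    y = seed_inv_sqrt(x)
    e = 1.0 - x * y * y
    return y * (1.0 + e * (0.5 + e * (0.375 + e * 0.3125)))
```

A full double-precision routine would keep more terms or iterate; the sketch only shows the structure of the high-order update.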
- A Single Molecule Array for Digital Targeted Molecular Analyses
Authors: Jenny Göransson (1), Carolina Wählby, Magnus Isaksson (1), Mathias Howell (1), Jonas Jarvius (1) and Mats Nilsson (1)
(1) Dept. of Genetics and Pathology, UU
Journal: Nucleic Acids Research 37(1), electronic publication, published online 081125
Abstract: We present a new random array format together with a decoding scheme for targeted multiplex digital molecular analyses. DNA samples are analyzed using multiplex sets of padlock or selector probes that create circular DNA molecules upon target recognition. The circularized DNA molecules are amplified through rolling-circle amplification (RCA) to generate amplified single molecules (ASMs). A random array is generated by immobilizing all ASMs on a microscopy glass slide. The ASMs are identified and counted through serial hybridizations of small sets of tag probes, according to a combinatorial decoding scheme. We show that the random array format permits at least 10 iterations of hybridization, imaging and dehybridization, a process required for the combinatorial decoding scheme. We further investigated the quantitative dynamic range and precision of the random array format. Finally, as a demonstration, the decoding scheme was applied for multiplex quantitative analysis of genomic loci in samples having verified copy-number variations. Of 31 analyzed loci, all but one were correctly identified and responded according to the known copy-number variations. The decoding strategy is generic in that the target can be any biomolecule which has been encoded into a DNA circle via a molecular probing reaction.
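The counting argument behind the combinatorial decoding is simple enough to sketch: if each hybridization round reports which of m tag probes bound an ASM, the rounds act as digits of a base-m barcode, so k rounds resolve up to m^k targets. The Python fragment below is a hypothetical illustration of that bookkeeping, not the paper's actual probe design.

```python
def decode_asm(tag_hits, tags_per_round):
    """Map the tag index observed in each hybridization round to a
    unique target ID, treating the rounds as digits of a base-m
    number: k rounds with m tags resolve m**k distinct targets."""
    ident = 0
    for round_no, tag in enumerate(tag_hits):
        ident += tag * tags_per_round ** round_no
    return ident

# e.g. 10 rounds with 4 tags would distinguish 4**10 (about 10**6) targets
assert decode_asm([2, 0, 3], tags_per_round=4) == 2 + 0 * 4 + 3 * 16
```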
- Image Analysis for Quantifying Microvessel Density in Renal Cell Carcinoma
Author: Hyun-Ju Choi
Journal: Journal of Korean Society of Medical Informatics, 2(15), pp. 217-225
Abstract: The most widely used method for quantifying new blood vessel growth in tumor angiogenesis is the determination of microvessel density, which is reported to be associated with tumor progression and metastasis, and to be a prognostic indicator of patient outcome. In this study, we propose a method for determining microvessel density by image analysis, to improve the accuracy and objectivity of the measurement. Four-micron-thick tissue sections of renal cell carcinoma samples were stained immunohistochemically for CD34. The regions with a high degree of vascularization were selected by an expert for digitization, and each image was digitized at a fixed bit depth and resolution. First, segmentation of the microvessels based on pixel classification using color features in a hybrid color space was performed. After applying a correction process for microvessels with discontinuities and separating touching microvessels, we counted the number of microvessels for the microvessel density measurement. The result was evaluated by comparison with manual quantification of the same images, which revealed that our computerized microvessel quantification was highly correlated with manual counting by a pathologist. The results indicate that our method is better than conventional computerized image analysis methods.
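As a rough sketch of the final counting step, assuming a binary vessel mask has already been produced by the pixel classifier (the hybrid-color-space classification and the corrections for broken and touching vessels are not reproduced), the density could be computed along these lines:

```python
import numpy as np
from scipy import ndimage

def microvessel_density(vessel_mask, area_mm2, min_size=10):
    """Count labeled microvessels in a binary segmentation mask and
    normalize by the field area.  min_size (in pixels) is an
    illustrative threshold for discarding staining speckles."""
    labels, n = ndimage.label(vessel_mask)
    sizes = ndimage.sum(vessel_mask, labels, range(1, n + 1))
    n_vessels = int(np.sum(sizes >= min_size))
    return n_vessels / area_mm2
```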
- Quantification of Colocalization and Cross-Talk Based on Spectral Angles
Authors: Milan Gavrilovic and Carolina Wählby
Journal: Journal of Microscopy, 234(3), pp. 311-324
Abstract: Common methods for quantification of colocalization in fluorescence microscopy typically require cross-talk free images or images where cross-talk has been eliminated by image processing, as they are based on intensity thresholding. Quantification of colocalization includes not only calculating a global measure of the degree of colocalization within an image, but also a classification of each image pixel as showing colocalized signals or not. In this paper, we present a novel, automated method for quantification of colocalization and classification of image pixels. The method, referred to as SpecDec, is based on an algorithm for spectral decomposition of multispectral data borrowed from the field of remote sensing. Pixels are classified based on hue rather than intensity. The hue distribution is presented as a histogram created by a series of steps that compensate for the quantization noise always present in digital image data, and classification rules are thereafter based on the shape of the angle histogram. Detection of colocalized signals is thus dependent only on the hue, making it possible to classify low-intensity objects as well, and decoupling image segmentation from detection of colocalization. Cross-talk will show up as shifts of the peaks of the histogram, and thus a shift of the classification rules, making the method essentially insensitive to cross-talk. The method can also be used to quantify and compensate for cross-talk, independent of the microscope hardware.
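A minimal sketch of the hue-angle idea follows, assuming a two-channel image and placeholder decision angles; SpecDec derives the actual decision boundaries from the peaks of the angle histogram, which is not reproduced here.

```python
import numpy as np

def angle_histogram(red, green, bins=90):
    """Per-pixel spectral angle between two channels; colocalized
    pixels cluster near 45 degrees regardless of their intensity,
    and cross-talk appears as a shift of the histogram peaks."""
    ang = np.degrees(np.arctan2(green.astype(float), red.astype(float)))
    return ang, np.histogram(ang, bins=bins, range=(0.0, 90.0))[0]

def classify_pixels(red, green, lo=30.0, hi=60.0):
    """Classify every pixel by hue angle alone; deciding which pixels
    are signal at all (segmentation) is deliberately a separate step.
    The cut angles lo/hi are placeholders, not SpecDec's rules."""
    ang, _ = angle_histogram(red, green)
    labels = np.ones(ang.shape, dtype=np.uint8)    # 1 = red only
    labels[ang > hi] = 2                           # 2 = green only
    labels[(ang >= lo) & (ang <= hi)] = 3          # 3 = colocalized
    return labels
```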
- Different Levels of 3D: An Evaluation of Visualized Discrete Spatiotemporal Data in Space-Time Cubes
Authors: Andreas Kjellin (1), Lars Pettersson, Stefan Seipel and Mats Lind (1)
(1) Human-Computer Interaction, UU
Journal: Information Visualization, electronic publication, published online 091106
Abstract: New technologies and techniques allow novel kinds of visualizations, and different types of 3D visualizations are constantly being developed. We propose a categorization of 3D visualizations and, based on this categorization, evaluate two versions of a space-time cube that show discrete spatiotemporal data. The two visualization techniques used are a head-tracked stereoscopic visualization ('strong 3D') and a static monocular visualization ('weak 3D'). In terms of effectiveness and efficiency, the weak 3D visualization is as good as the strong 3D one, so advanced 3D visualizations may not be necessary for these kinds of tasks.
- Myonuclear Domain Size and Myosin Isoform Expression in Muscle Fibres from Mammals Representing a 100 000-Fold Difference in Body Size
Authors: Jing-Xia Liu (1), Anna-Stina Höglund (1), Patrick Karlsson, Joakim Lindblad, Rizwan Qaisar (1), Sudhakar Aare (1), Ewert Bengtsson and Lars Larsson (1)
(1) Dept. of Neuroscience, UU
Journal: Experimental Physiology, 94(1), pp. 117-129
Abstract: This comparative study of myonuclear domain (MND) size in mammalian species representing a 100 000-fold difference in body mass, ranging from 25 g to 2500 kg, was undertaken to improve our understanding of myonuclear organization in skeletal muscle fibres. Myonuclear domain size was calculated from three-dimensional reconstructions in a total of 235 single muscle fibre segments at a fixed sarcomere length. Irrespective of species, the largest MND size was observed in muscle fibres expressing fast myosin heavy chain (MyHC) isoforms, but in the two smallest mammalian species studied (mouse and rat), MND size was not larger in the fast-twitch fibres expressing the IIA MyHC isoform than in the slow-twitch type I fibres. In the larger mammals, the type I fibres always had the smallest average MND size, but contrary to mouse and rat muscles, type IIA fibres had lower mitochondrial enzyme activities than type I fibres. Myonuclear domain size was highly dependent on body mass in the two muscle fibre types expressed in all species, i.e. types I and IIA. Myonuclear domain size increased in muscle fibres expressing both the β/slow (type I; r = 0.84, P < 0.001) and the fast IIA MyHC isoform (r = 0.90, P < 0.001). Thus, MND size scales with body size and is highly dependent on muscle fibre type, independent of species. However, myosin isoform expression is not the sole protein determining MND size, and other protein systems, such as mitochondrial proteins, may be equally or more important determinants of MND size.
- Neighborhood Sequences in the Diamond Grid: Algorithms with Two and Three Neighbors
Authors: Benedek Nagy (1) and Robin Strand
(1) Faculty of Informatics, University of Debrecen, Debrecen, Hungary
Journal: International Journal of Imaging Systems and Technology 19(29), pp. 146-157
Abstract: In digital image processing, digital distances are useful, and distances based on neighborhood sequences are widely used. In this article, the diamond grid is considered, that is, the three-dimensional grid of carbon atoms in the diamond crystal. An algorithm to compute a shortest path defined by a neighborhood sequence between any two points in the diamond grid is presented. A formula to compute the distance based on neighborhood sequences with two neighborhood relations is given. The metric and nonmetric properties of some distances based on neighborhood sequences are also discussed. Finally, the constrained distance transformation is demonstrated.
- Fully Automatic Heart Beat Rate Determination in Digital Video Recordings of Rat Embryos
Authors: Muhammad Khalid Khan Niazi, Mats Nilsson (1), Bengt R. Danielsson (1) and Ewert Bengtsson
(1) Div. of Toxicology, UU
Journal: Transactions on Mass-Data Analysis of Images and Signals 1(2), pp. 132-146
Abstract: Embryo culture of rodents is an established technique for monitoring adverse effects of chemicals on embryonic development. The assessment involves determination of the heart rate of the embryo, which is usually done visually, a technique that is tedious and error-prone. We present a new method for fully automatic heart detection in digital videos of rat embryos. It first detects the heart location by using a decimation-free directional filter bank along with the first absolute moment, and then counts the number of heart beats for a predetermined period of time. Using this automated method, many more embryos can be evaluated at reasonable cost.
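Once the heart region has been localized, the beat-counting stage can be illustrated with a frequency-domain sketch like the one below; it assumes a clip of the detected region and a known frame rate, and is not the authors' counting procedure (their localization step, the decimation-free directional filter bank with the first absolute moment, is not reproduced).

```python
import numpy as np

def beats_per_minute(frames, fps):
    """Estimate heart rate from a video clip of the detected heart
    region (frames: T x H x W).  The mean intensity of the region
    pulsates with the beat, so the dominant non-DC frequency of that
    signal gives the rate."""
    signal = frames.reshape(frames.shape[0], -1).mean(axis=1)
    signal = signal - signal.mean()              # remove the DC term
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]
```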
- A Detailed Analysis of 3D Subcellular Signal Localization
Authors: Amalka Pinidiyaarachchi, Agata Zieba (1), Amin Allalou, Katerina Pardali (1) and Carolina Wählby
(1) Dept. of Genetics and Pathology, UU
Journal: Cytometry part A, 75A(4), pp. 319-328
Abstract: Detection and localization of fluorescent signals in relation to other subcellular structures is an important task in various biological studies. Many methods for analysis of fluorescence microscopy image data are limited to 2D. As cells are in fact 3D structures, there is a growing need for robust methods for analysis of 3D data. This article presents an approach for detecting point-like fluorescent signals and analyzing their subnuclear position. Cell nuclei are delineated using marker-controlled (seeded) 3D watershed segmentation. User-defined object and background seeds are given as input, and gradient information defines merging and splitting criteria. Point-like signals are detected using a modified stable wave detector and localized in relation to the nuclear membrane using distance shells. The method was applied to a set of biological data studying the localization of Smad2-Smad4 protein complexes in relation to the nuclear membrane. Smad complexes appear as early as 1 min after stimulation while the highest signal concentration is observed 45 min after stimulation, followed by a concentration decrease. The robust 3D signal detection and concentration measures obtained using the proposed method agree with previous observations while also revealing new information regarding the complex formation.
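The distance-shell analysis lends itself to a short sketch. Assuming the 3D nucleus mask and the signal coordinates have already been obtained (by the seeded 3D watershed and the modified stable wave detector, neither reproduced here), signals can be binned by their distance to the nuclear membrane:

```python
import numpy as np
from scipy import ndimage

def shell_counts(nucleus_mask, signal_coords, n_shells=5):
    """Bin detected signals into distance shells relative to the
    nuclear membrane: the Euclidean distance transform of the 3D
    nucleus mask is split into n_shells equal-depth layers, and the
    signals ((z, y, x) coordinates) are counted per layer."""
    dist = ndimage.distance_transform_edt(nucleus_mask)
    edges = np.linspace(0.0, dist.max(), n_shells + 1)
    counts = np.zeros(n_shells, dtype=int)
    for z, y, x in signal_coords:
        shell = np.searchsorted(edges, dist[z, y, x], side='right') - 1
        counts[min(shell, n_shells - 1)] += 1
    return counts
```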
- Performance of Principal Component Analysis and Independent Component Analysis with Respect to Signal Extraction from Noisy Positron Emission Tomography Data: a Study on Computer Simulated Images
Authors: Pasha Razifar (1), Hamed Hamid Muhammed (2), Fredrik Engbrant, Per-Edvin Svensson, Johan Olsson, Ewert Bengtsson, Bengt Långström (3) and Mats Bergström (4)
(1) Uppsala Applied Science Laboratory, GE Healthcare
(2) School of Technology and Health, KTH
(3) Uppsala Imanet, GE Healthcare
(4) Dept. of Pharmaceutical Biosciences, UU
Journal: Open Neuroimaging Journal 1(3), pp. 1-16
Abstract: Multivariate image analysis tools are used for analyzing dynamic or multidimensional Positron Emission Tomography (PET) data with the aim of noise reduction, dimension reduction and signal separation. Principal Component Analysis (PCA) is one of the most commonly used multivariate image analysis tools applied on dynamic PET data. Independent Component Analysis (ICA) is another multivariate image analysis tool used to extract and separate signals. Because of the presence of high and variable noise levels and correlation in the different PET images, which may confound the multivariate analysis, it is essential to explore and investigate the different types of pre-normalization (transformation) methods that need to be applied prior to application of these tools. In this study, we explored the performance of PCA and ICA in extracting signals and reducing noise, thereby increasing the Signal to Noise Ratio (SNR), in a dynamic sequence of PET images, where the features of the noise are different compared with some other medical imaging techniques. Applications on computer-simulated PET images were explored and compared. Application of PCA generated relatively similar results, with some minor differences, on the images with different noise characteristics. However, clear differences were seen with respect to the type of pre-normalization. ICA on images normalized using two types of normalization methods also seemed to perform relatively well, but did not reach the improvement in SNR achieved by PCA. Furthermore, ICA seems to have a tendency under some conditions to shift information from IC1 to other independent components, and to be more sensitive to the level of noise. PCA is a more stable technique than ICA and creates better results, both qualitatively and quantitatively, in the simulated PET images. PCA can extract the signals from the noise rather well and is not sensitive to the type, magnitude and correlation of the noise, when the input data are correctly handled by a proper pre-normalization. It is important to note that PCA, as inherently a method to separate signal information into different components, could still generate PC1 images with improved SNR compared to mean images.
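For readers unfamiliar with the setup, a minimal sketch of PCA applied to a dynamic frame sequence follows; plain per-pixel centring stands in for the pre-normalization schemes that the study actually compares.

```python
import numpy as np

def pca_of_dynamic_sequence(frames):
    """PCA of a dynamic image sequence (frames: T x H x W): each time
    frame is one observation and each pixel one variable.  SVD of the
    T x (H*W) matrix avoids forming the full pixel-by-pixel
    covariance matrix."""
    t, h, w = frames.shape
    X = frames.reshape(t, h * w).astype(float)
    X -= X.mean(axis=0)                      # centre each pixel over time
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt.reshape(-1, h, w)        # the PC1 image is components[0]
    explained = s**2 / np.sum(s**2)
    return components, explained
```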
- High Precision Boundary Length Estimation by Utilizing Gray-Level Information
Authors: Natasa Sladoje (1) and Joakim Lindblad
(1) Faculty of Engineering, University of Novi Sad, Serbia
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2)
Abstract: We present a novel method that provides an accurate and precise estimate of the length of the boundary (perimeter) of an object by taking into account gray levels on the boundary of the digitization of the same object. Assuming a model where pixel intensity is proportional to the coverage of a pixel, we show that the presented method provides error-free measurements of the length of straight boundary segments in the case of nonquantized pixel values. For a more realistic situation, where pixel values are quantized, we derive optimal estimates that minimize the maximal estimation error. We show that the estimate converges toward a correct value as the number of gray levels tends toward infinity. The method is easy to implement; we provide the complete pseudocode. Since the method utilizes only a small neighborhood, it is very easy to parallelize. We evaluate the estimator on a set of concave and convex shapes with known perimeters, digitized at increasing resolution. In addition, we provide an example of applicability of the method on real images, by suggesting appropriate preprocessing steps and presenting results of a comparison of the suggested method with other local approaches.
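A toy version of the coverage idea can be sketched for a boundary that crosses the image left to right; this is not the paper's estimator, which derives optimal local weights that minimize the maximal error, but it shows how gray levels carry sub-pixel boundary information.

```python
import numpy as np

def edge_length_from_coverage(coverage):
    """Toy coverage-based length estimate.  Under the pixel-coverage
    model, summing each column of the coverage image gives the
    sub-pixel height of the boundary in that column; the length is
    then the chain of segments between adjacent columns."""
    heights = coverage.sum(axis=0).astype(float)
    return float(np.sum(np.hypot(1.0, np.diff(heights))))
```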
- Weighted Distances Based on Neighborhood Sequences for Point-Lattices
Author: Robin Strand
Journal: Discrete Applied Mathematics 157(4), pp. 641-652
Abstract: A path-based distance is defined as the cost of a minimal cost-path between two points. One such distance function is the weighted distance based on a neighborhood sequence. It can be defined using any number of neighborhood relations and weights in conjunction with a neighborhood sequence. The neighborhood sequence restricts some steps in the path to a smaller neighborhood. We give formulas for computing the point-to-point distance and conditions for metricity for weighted distances based on neighborhood sequences with two neighborhood relations for the general case of point-lattices.
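To make the definition concrete, the following sketch computes a weighted neighborhood-sequence distance on Z^2 by uniform-cost search; the paper instead gives closed formulas, and treats general point-lattices rather than the square grid and illustrative weights assumed here.

```python
import heapq

def ns_distance(p, q, w1=1.0, w2=1.414, sequence=(1, 2)):
    """Weighted distance based on a periodic neighborhood sequence on
    Z^2.  sequence[k] = 1 allows only city-block steps (weight w1) at
    step k; sequence[k] = 2 also allows diagonal steps (weight w2)."""
    period, margin = len(sequence), 2
    xlo, xhi = min(p[0], q[0]) - margin, max(p[0], q[0]) + margin
    ylo, yhi = min(p[1], q[1]) - margin, max(p[1], q[1]) + margin
    best = {(p[0], p[1], 0): 0.0}
    heap = [(0.0, p[0], p[1], 0)]
    while heap:
        d, x, y, k = heapq.heappop(heap)
        if (x, y) == (q[0], q[1]):
            return d                      # first pop of q is minimal
        if d > best.get((x, y, k), float('inf')):
            continue
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                diagonal = dx != 0 and dy != 0
                if diagonal and sequence[k % period] == 1:
                    continue              # this step: smaller neighborhood
                nx, ny = x + dx, y + dy
                if not (xlo <= nx <= xhi and ylo <= ny <= yhi):
                    continue
                nk = (k + 1) % period
                nd = d + (w2 if diagonal else w1)
                if nd < best.get((nx, ny, nk), float('inf')):
                    best[(nx, ny, nk)] = nd
                    heapq.heappush(heap, (nd, nx, ny, nk))
    return float('inf')
```

With the defaults this reproduces a weighted octagonal distance, alternating the 4- and 8-neighborhood from step to step.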
- Path-Based Distance Functions in n-Dimensional Generalizations of the Face- and Body-Centered Cubic Grids
Authors: Robin Strand and Benedek Nagy (1)
(1) Faculty of Informatics, University of Debrecen, Debrecen, Hungary
Journal: Discrete Applied Mathematics 157(16), pp. 3386-3400
Abstract: Path-based distance functions are defined on n-dimensional generalizations of the face-centered cubic and body-centered cubic grids. The distance functions use both weights and neighborhood sequences. These distances share many properties with traditional path-based distance functions, such as the city-block distance, but are less rotationally dependent. For the three-dimensional case, we introduce four different error functions which are used to find the optimal weights and neighborhood sequences that can be used to define the distance functions with low rotational dependency.
- Indication of an Interspecies "Spill-Over" Reaction in Common Swift Apus Apus
Authors: Olle Tenow (1), Torbjörn Fagerström and Cris Luengo
(1) SLU
Journal: Ornis Svecica, 19(4), pp. 233-236
- Visual Exploration of Three-Dimensional Gene Expression using Physical Views and Linked Abstract Views
Authors: Gunther H Weber (1), Oliver Rübel (2), Min-Yu Huang (3), Angela H DePace (4), Charless C Fowlkes (5), Soile V E Keränen (1), Cris L Luengo Hendriks, Hans Hagen (2), David W Knowles (1), Jitendra Malik (6), Mark D Biggin (1) and Bernd Hamann (3)
(1) Lawrence Berkeley National Laboratory, Berkeley, CA
(2) Dept. of Computer Science, University of Kaiserslautern
(3) Dept. of Computer Science, University of California, Davis
(4) Dept. of Systems Biology, Harvard Medical School
(5) Dept. of Computer Science, Donald Bren School of Information and Computer Sciences, University of California, Irvine
(6) Computer Science Division, University of California, Berkeley
Journal: IEEE/ACM Transactions on Computational Biology and Bioinformatics, 6(2), pp. 296-309
Abstract: During animal development, complex patterns of gene expression provide positional information within the embryo. To better understand the underlying gene regulatory networks, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed methods that support quantitative computational analysis of three-dimensional (3D) gene expression in early Drosophila embryos at cellular resolution. We introduce PointCloudXplore (PCX), an interactive visualization tool that supports visual exploration of relationships between different genes' expression using a combination of established visualization techniques. Two aspects of gene expression are of particular interest: 1) gene expression patterns defined by the spatial locations of cells expressing a gene and 2) relationships between the expression levels of multiple genes. PCX provides users with two corresponding classes of data views: 1) Physical Views based on the spatial relationships of cells in the embryo and 2) Abstract Views that discard spatial information and plot expression levels of multiple genes with respect to each other. Cell Selectors highlight data associated with subsets of embryo cells within a View. Using linking, these selected cells can be viewed in multiple representations. We describe PCX as a 3D gene expression visualization tool and provide examples of how it has been used by BDTNP biologists to generate new hypotheses.