High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown to be suitable for such super-resolution tasks. However, a large number of high-resolution examples is needed, which may not be available in many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capability of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
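As a minimal sketch of the physics loss such a PINN minimises for the first test case (an illustration, not the authors' implementation): for Burgers' equation, u_t + u u_x = ν u_xx, the PDE residual is penalised at collocation points alongside the data misfit. The viscosity value and the finite-difference derivatives (standing in for the automatic differentiation a real PINN would use) are assumptions chosen for illustration.

```python
# Physics loss for Burgers' equation, u_t + u*u_x = nu*u_xx. The "network" is
# any callable u(t, x); derivatives are estimated with central finite
# differences as a dependency-free stand-in for automatic differentiation.
NU = 0.01   # assumed viscosity, not a value from the paper
H = 1e-4    # finite-difference step

def burgers_residual(u, t, x, h=H, nu=NU):
    """PDE residual f = u_t + u*u_x - nu*u_xx; f == 0 means the PDE holds."""
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    u_x = (u(t, x + h) - u(t, x - h)) / (2 * h)
    u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h ** 2
    return u_t + u(t, x) * u_x - nu * u_xx

def physics_loss(u, points):
    """Mean squared PDE residual over collocation points (t, x)."""
    return sum(burgers_residual(u, t, x) ** 2 for t, x in points) / len(points)

# For the trial field u(t, x) = x the residual reduces to u*u_x = x,
# so at x = 2 it is approximately 2.
u = lambda t, x: x
print(round(burgers_residual(u, 0.5, 2.0), 4))
```

In a full PINN, `u` would be a neural network and `physics_loss` would be added to a mean-squared data misfit before gradient-based training.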
ISSN: 1361-6501
Launched in 1923, Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
Simon Laflamme et al 2023 Meas. Sci. Technol. 34 093001
Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge is in designing the sensing solution that could yield actionable information. This is a difficult task to conduct cost-effectively, because of the large surfaces under consideration and the localized nature of typical defects and damages. There have been significant research efforts in empowering conventional measurement technologies for applications to SHM in order to improve the performance of the condition assessment process. Yet, the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to provide a path to research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply the use of numerous sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multifunctional materials are sensing solutions that combine multiple capabilities, for example also serving structural functions. Remote sensing solutions are contactless, for example cell phones, drones, and satellites; this category also includes remotely controlled robots.
Guanglin Chen et al 2024 Meas. Sci. Technol. 35 086202
Multi-rotor unmanned aerial vehicles (UAVs) are extensively utilized across various domains, and the motor constitutes a pivotal element in the UAV power system. The majority of UAV failures and crashes stem from motor malfunctions, underscoring the imperative need for comprehensive research on fault diagnosis in UAV motors to ensure the stable and reliable execution of flight tasks. This study focuses on quadrotor UAVs and devises targeted fault simulation experiments based on the structural features and operational characteristics of the DC brushless motor used in quadrotor UAVs, specifically examining the stator, rotor, and bearings. To address challenges related to the UAV's own loads, the limited space for redundant parts, and the high cost and difficulty of installing sensors for traditional diagnostic signals such as vibration and temperature, this study uses current signals as a substitute. This approach resolves the issue of challenging data collection for UAVs, and a current-signal-based fault diagnosis method for UAV motors is investigated. Lastly, because the UAV's flight stability is highly sensitive to the health status of its components, only limited training samples of fault data are available; with so few samples, traditional machine learning and deep learning methods struggle to identify representative features, risking overfitting and reduced diagnostic accuracy. To overcome this challenge, we propose a hybrid neural network fault diagnosis model that combines a width learning system with a convolutional neural network (CNN). The width learning system removes temporal characteristics from the original current signal, capturing more comprehensive and representative sample features in the width feature space. Subsequently, the CNN is employed for feature extraction and classification. In empirical small-sample fault diagnosis experiments using current-signal data from UAV motors, the proposed model outperforms the comparison models.
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standardization (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain from timing and synchronization of the data acquisition system, to illumination, mechanical properties of the tracer particles, their imaging, analysis of the particle motion, data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. Those are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
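The two propagation routes mentioned here can be contrasted with a toy example (all values are illustrative, not from the review): propagating independent velocity uncertainties into a derived product q = u·v, such as an instantaneous Reynolds-stress sample, via a first-order Taylor series and via Monte Carlo sampling.

```python
# Taylor-series vs Monte Carlo uncertainty propagation for q = u * v with
# independent input uncertainties. Illustrative numbers only.
import random
import statistics

u, v = 2.0, 1.5          # measured velocity components (m/s)
su, sv = 0.05, 0.04      # their standard uncertainties (m/s)

# First-order Taylor series: s_q^2 = (dq/du)^2 * su^2 + (dq/dv)^2 * sv^2
s_taylor = ((v * su) ** 2 + (u * sv) ** 2) ** 0.5

# Monte Carlo: sample the inputs, take the spread of the output.
random.seed(42)
samples = [random.gauss(u, su) * random.gauss(v, sv) for _ in range(200_000)]
s_mc = statistics.stdev(samples)

print(round(s_taylor, 4), round(s_mc, 4))
```

For this weakly non-linear quantity the two estimates agree closely; Monte Carlo becomes the safer route when the derived quantity is strongly non-linear or the input distributions are non-Gaussian.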
Jaqueline Stauffenberg et al 2024 Meas. Sci. Technol. 35 085011
This paper explores large-area tip-based nanofabrication by field-emission scanning probe lithography and demonstrates that atomic force microscopy can simultaneously be performed on macroscopic scales. This is made possible by combining tip-based technology with a planar nanopositioning and nanomeasuring machine. Using long-range atomic force microscopy measurements of regular grating structures, the performance of the machine is thoroughly characterized over its full 100 mm range of motion, which was confirmed by repeated measurements. After initial work focused on achieving a minimum line width of 40 nm in microscopic areas, a grating with a pitch of 1 μm is additionally fabricated over a total length of 10 mm, and its dimensions and deviations are also considered.
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable small sized options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant against ambient light as well as thermal emissions, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
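The gating principle described above can be illustrated with a toy photon-arrival simulation (all timescales below are assumed, for illustration): Raman photons arrive essentially within the laser pulse, fluorescence photons carry an additional exponentially distributed delay, and a short detector gate keeps most of the former while rejecting most of the latter.

```python
# Toy time-gating simulation: photons arriving within the gate are detected,
# later ones are rejected during the detector dead-time.
import random

random.seed(7)
PULSE = 0.1      # laser pulse width (ns), assumed
TAU_FL = 5.0     # fluorescence lifetime (ns), assumed
GATE = 0.5       # gate width after pulse start (ns), assumed
N = 100_000

# Raman photons: emitted during the pulse. Fluorescence photons: pulse
# timing plus an exponentially distributed emission delay.
raman = [random.uniform(0, PULSE) for _ in range(N)]
fluor = [random.uniform(0, PULSE) + random.expovariate(1 / TAU_FL)
         for _ in range(N)]

kept_raman = sum(t <= GATE for t in raman) / N
kept_fluor = sum(t <= GATE for t in fluor) / N

print(f"Raman kept: {kept_raman:.2%}, fluorescence kept: {kept_fluor:.2%}")
```

With these (assumed) timescales the gate keeps essentially all Raman photons while rejecting roughly 90% of the fluorescence, which is the fluorescence-suppression mechanism the review surveys.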
Louise Wright and Stuart Davidson 2024 Meas. Sci. Technol. 35 051001
Digital twinning is a rapidly growing area of research. Digital twins combine models and data to provide up-to-date information about the state of a system. They support reliable decision-making in fields such as structural monitoring and advanced manufacturing. The use of metrology data to update models in this way offers benefits in many areas, including metrology itself. The recent activities in digitalisation of metrology offer a great opportunity to make metrology data 'twin-friendly' and to incorporate digital twins into metrological processes. This paper discusses key features of digital twins that will inform their use in metrology and measurement, highlights the links between digital twins and virtual metrology, outlines what use metrology can make of digital twins and how metrology and measured data can support the use of digital twins, and suggests potential future developments that will maximise the benefits achieved.
Adam Thompson et al 2021 Meas. Sci. Technol. 32 105013
Maximum permissible errors (MPEs) are an important measurement system specification and form the basis of periodic verification of a measurement system's performance. However, there is no standard methodology for determining MPEs, so when they are not provided, or not suitable for the measurement procedure performed, it is unclear how to generate an appropriate value with which to verify the system. Whilst a simple approach might be to take many measurements of a calibrated artefact and then use the maximum observed error as the MPE, this method requires a large number of repeat measurements for high confidence in the calculated MPE. Here, we present a statistical method of MPE determination, capable of providing MPEs with high confidence and minimum data collection. The method is presented with 1000 synthetic experiments and is shown to determine an overestimated MPE within 10% of an analytically true value in 99.2% of experiments, while underestimating the MPE with respect to the analytically true value in 0.8% of experiments (overestimating the value, on average, by 1.24%). The method is then applied to a real test case (probing form error for a commercial fringe projection system), where the efficiently determined MPE is overestimated by 0.3% with respect to an MPE determined using an arbitrarily chosen large number of measurements.
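The naive route mentioned in the abstract, and a generic statistical alternative, can be sketched on synthetic errors (this is a toy illustration, not the paper's specific method): the naive MPE is the largest observed error over many repeats, while a statistical estimate assumes approximately normal errors and applies a coverage factor k.

```python
# Naive vs simple statistical MPE estimation on synthetic measurement errors
# of a calibrated artefact. All numbers are made up for illustration.
import random
import statistics

random.seed(0)
true_value = 10.0
# Simulated repeat measurements with a small bias and random scatter.
readings = [true_value + random.gauss(0.02, 0.05) for _ in range(30)]
errors = [r - true_value for r in readings]

# Naive approach: largest observed error; needs many repeats for confidence.
mpe_naive = max(abs(e) for e in errors)

# Statistical approach (generic, assumed): |bias| + k * s with coverage k = 3.
bias = statistics.fmean(errors)
s = statistics.stdev(errors)
mpe_stat = abs(bias) + 3.0 * s

print(round(mpe_naive, 4), round(mpe_stat, 4))
```

The point of the paper's contribution is precisely that a principled statistical estimate reaches a stated confidence with far fewer repeats than waiting for the empirical maximum to converge.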
Gustavo Quino et al 2021 Meas. Sci. Technol. 32 015203
Digital image correlation (DIC) is a widely used technique in experimental mechanics for full field measurement of displacements and strains. The subset matching based DIC requires surfaces containing a random pattern. Even though there are several techniques to create random speckle patterns, their applicability is still limited. For instance, traditional methods such as airbrush painting are not suitable in the following challenging scenarios: (i) when time available to produce the speckle pattern is limited and (ii) when dynamic loading conditions trigger peeling of the pattern. The development and application of some novel techniques to address these situations is presented in this paper. The developed techniques make use of commercially available materials such as temporary tattoo paper, adhesives and stamp kits. The presented techniques are shown to be quick, repeatable, consistent and stable even under impact loads and large deformations. Additionally, they offer the possibility to optimise and customise the speckle pattern. The speckling techniques presented in the paper are also versatile and can be quickly applied in a variety of materials.
Bartosz Pruchnik et al 2024 Meas. Sci. Technol. 35 085901
Scanning probe microscopy (SPM) is a broad family of diagnostic methods. A common constraint of SPM is that it interacts only with the specimen surface, which is especially troublesome for complex volumetric systems, e.g. microbial or microelectronic ones. Scanning thermal microscopy (SThM) overcomes that constraint, since thermal information is collected from a broader volume. We present a transformer-bridge-based setup for resistive-nanoprobe microscopy. With a low-frequency (LF) (approx. 1 kHz) detection signal, the bridge resolution becomes independent of parasitic capacitances present in the measurement setup. We present a characterisation and metrological description of the setup, with a system resolution of 0.6 mK and a sensitivity of 5 mV K−1. The transformer bridge provides galvanic separation, enabling measurements in various environments, as pursued for purposes of molecular biology. We present SThM measurement results for a high-thermal-contrast sample of carbon fibres in an epoxy resin. Finally, we analyse the influence of thermal imaging on topography imaging in terms of information channel capacity. We conclude that the transformer-bridge-based SThM system is a fully functional design when combined with low driving frequencies and resistive thermal nanoprobes from Kelvin Nanotechnology.
Ru Bai et al 2024 Meas. Sci. Technol. 35 085119
In this paper, we propose and design a novel dual-range tunnel magnetoresistance (TMR) current sensor with a single magnetic ring structure. This design incorporates two distinct magnetic guiding effects, namely magnetic shunt and magnetic aggregation, within the same magnetic ring. By integrating a high-sensitivity TMR sensor chip with a closed-loop feedback circuit, we achieve a TMR current sensor with excellent linearity, high resolution, as well as high frequency response. The magnetic ring structure is first modeled and simulated, establishing a correlation between the distribution of magnetic induction intensity and the parameters of the magnetic ring and feedback coils. Through simulation optimization and theoretical calculations, we determine the optimal positions for TMR sensor chips in the magnetic ring, suitable for both current ranges. When a signal current is present, the TMR sensor chip generates a weak differential voltage signal, which is subsequently amplified, processed, and automatically transmitted to the laptop via a serial port. Furthermore, the sensor allows for automatic switching between the two current ranges. The results demonstrate that our designed dual-range current sensor exhibits outstanding performance characteristics, including a high resolution of 500 μA in the small range, accuracy of 0.10%, excellent linearity of 0.011%, and a fast frequency response of 500 kHz. These features make it highly applicable in various fields such as new energy vehicles and smart grids, indicating promising prospects for its widespread utilization.
Xiang Xiong et al 2024 Meas. Sci. Technol. 35 085024
Wooden plank images in industrial measurements often contain numerous textureless areas. Furthermore, due to the thin plate structure, the three-dimensional (3D) disparity of these planks is predominantly confined to a narrow range. Consequently, achieving accurate 3D matching of wooden plank images has consistently presented a challenging task within the industry. In recent years, deep learning has progressively supplanted traditional stereo matching methods due to its inherent advantages, including rapid inference and end-to-end processing. Nonetheless, the acquisition of datasets for stereo matching networks poses an additional challenge, primarily attributable to the difficulty in obtaining accurate disparity data. Thus, this paper presents a novel stereo matching method incorporating three key innovations. Firstly, an enhanced gated recurrent unit network is introduced, accompanied by a redesigned structure to achieve higher matching accuracy. Secondly, an efficient preprocessing module is proposed, aimed at improving the algorithm's efficiency. Lastly, in response to the challenges posed by dataset acquisition, we innovatively employed image simulation software to obtain a high-quality simulated dataset of wooden planks. To assess the feasibility of our approach, we conducted both simulated and real experiments. The experimental results clearly exhibit the superiority of our method when compared to existing approaches in terms of both stability and accuracy. In the simulation experiment, our method attained a bad1.0 score of 2.1% (compared to the baseline method's 9.76%); in the real experiment, our method achieved an average error of 0.104 mm (compared to the baseline method's 0.268 mm). It is worth noting that our study aims to address the challenge of acquiring datasets for deep learning and bridging the gap between simulated and real data, increasing the applicability of deep learning in more industrial measurement domains.
Meng Ma et al 2024 Meas. Sci. Technol. 35 086144
Improving reliable health monitoring systems for liquid-propellant rocket engines (LREs) is crucial for reusable launch vehicles, as it contributes to providing competitive and cost-effective propulsion systems. This accentuates the need for reliable and rapid health-stage assessment of the system and follow-up damage-mitigating control. In this paper, we propose a novel adaptive physics-encoded graph neural network for health-stage assessment of LREs. Our approach embeds the relations between different sensors obtained through expert experience, which are used to construct a physical graph layer. To better capture the information contained in all the sensor data, a novel convolutional layer of adaptive auto-regressive moving-average filters is designed, which accounts for the personalized information-propagation needs of each neural network layer. The performance of the proposed method is quantified with data obtained from physics simulations and real-world engineering systems. The results show that our model has potential applicability for the health-stage assessment of LREs with high accuracy.
Yingxin Luan et al 2024 Meas. Sci. Technol. 35 086143
This paper focuses on localizing dynamic impacts that can cause significant damage to wind turbine blades (WTBs). Localization of dynamic impacts on WTBs is essential because of the blades' vulnerability to impacts such as birds, stones and hail. The proposed deep learning methodology accurately locates the impacted blade and the specific impact position using measurements from a limited number of sensors. In particular, a novel hierarchical adaptive selection neural network is proposed, which integrates a classification subnetwork and a regression subnetwork. Specifically, an adaptive blade selection mechanism is designed to determine the impacted blade for classification, while an adaptive window selection mechanism is developed to highlight the representative time period for regression. By deploying a limited number of sensors to acquire measured vibration data, the proposed method can accurately identify the collision locations of transient impacts loaded on WTBs. In simulated and real-world experiments, the proposed method achieves mean absolute errors of 0.189 cm and 1.088 cm, respectively, for impact localization. The experimental results demonstrate the superior performance of the proposed model in comparison with existing methods for localizing impulsive loads on WTBs.
Uzair Bashir et al 2024 Meas. Sci. Technol. 35 086319
This paper presents an efficient design approach for a microelectromechanical systems (MEMS) gyroscope, considering the design constraints of MEMSCAP's Silicon-on-Insulator Multi-User MEMS Processes (SOIMUMPs). A design for a decoupled-mass MEMS resonant gyroscope with tuning electrodes is presented to minimize cross-axis coupling and optimize the gyroscope's performance. The device features a size of . The decoupled-mass MEMS gyroscope has been designed using a novel mechanical beam configuration that optimizes the structural design to reduce unwanted interactions between the drive- and sense-axis motions. The desired mode order has been achieved by obtaining the first two modes as the drive and sense modes at 9946.3 Hz and 10 202.7 Hz, respectively. Parallel-plate electrostatic tuning electrodes have been added to adjust and match the drive and sense modes at a resonant frequency of 9946 Hz. This mode matching is crucial for optimizing gyroscope performance, accuracy, and sensitivity. The effects of temperature variation on the performance of the designed gyroscope have been investigated, particularly in terms of its mode-matched operation achieved using the tuning electrodes. Finite element method (FEM) simulations demonstrate that thermal stability is achieved and mode-matched operation is maintained within the temperature range of −40 °C to 80 °C. The gyroscope exhibits enhanced performance compared to existing MEMS gyroscopes under mode-matched conditions, featuring a lower temperature coefficient of frequency (TCF) of 0.0415 Hz °C−1 in the drive mode and 0.0165 Hz °C−1 in the sense mode. FEM analysis of the fabrication process tolerances has been performed, showing that the gyroscope's resonant frequencies, output capacitance, output displacement, and sense-mode quality factor are negligibly sensitive to fabrication process tolerances. Transient analysis was conducted using a CoventorWare MEMS+ 3D schematic integrated with MATLAB Simulink to measure the gyroscope's output response, which corresponded to the varying input angular velocity signal.
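A back-of-the-envelope check of what the reported TCF values imply for mode matching (the linear frequency-temperature relation and the 20 °C matching temperature are assumptions; the TCFs and matched frequency are from the abstract):

```python
# Worst-case drive-sense frequency split over the qualified temperature range,
# assuming linear f(T) and matching at 20 degC.
TCF_DRIVE = 0.0415   # Hz per degC (from the abstract)
TCF_SENSE = 0.0165   # Hz per degC (from the abstract)
F_MATCH = 9946.0     # Hz, matched resonant frequency
T_MATCH = 20.0       # degC, assumed matching temperature

def mode_split(temp_c):
    """Drive-sense frequency split (Hz) at a given temperature."""
    return abs((TCF_DRIVE - TCF_SENSE) * (temp_c - T_MATCH))

worst = max(mode_split(t) for t in (-40.0, 80.0))
print(round(worst, 2), "Hz worst-case split")
```

Under these assumptions the worst-case split is about 1.5 Hz, roughly 150 ppm of the 9946 Hz matched frequency, which is consistent with the claim that mode-matched operation is maintained across −40 °C to 80 °C.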
Hanlin Guan et al 2024 Meas. Sci. Technol. 35 082001
Hydraulic component faults are characterized by nonlinear time-varying signals, strong concealment, and difficult feature extraction. Timely and accurate fault diagnosis of hydraulic components helps curb economic losses and accidents, so researchers have carried out extensive work on hydraulic components. Information fusion technology can combine multi-source data across multiple dimensions to mine fault features, which effectively improves the accuracy and reliability of fault diagnosis results. However, a comprehensive and systematic review in this domain has been lacking. Therefore, in this paper, information-fusion fault diagnosis technologies for hydraulic components are summarized and analyzed, encompassing the main process of information-fusion fault diagnosis and the research status of information-fusion fault diagnosis for hydraulic systems. The methods and techniques involved in the fusion process, the data sources, and the fusion methods used in hydraulic-component fault diagnosis are elaborated and summarized. The problems of information fusion in fault diagnosis of hydraulic components are analyzed, solutions are discussed, and research ideas for improving information-fusion fault diagnosis are put forward. Finally, digital twin (DT) technology is introduced, and the advantages and research status of intelligent fault diagnosis based on DT are summarized. On this basis, intelligent fault diagnosis of hydraulic components based on information fusion is summarized, and the challenges and future research directions for applying information fusion and DT to intelligent fault diagnosis of hydraulic components are comprehensively analyzed.
Xin Li et al 2024 Meas. Sci. Technol. 35 072002
The health condition of rolling bearings has a direct impact on the safe operation of rotating machinery. Their working environment is harsh and their operating conditions are complex, which complicates fault diagnosis. With the development of computer technology, deep learning has been applied in the field of fault diagnosis and has developed rapidly. Among deep learning methods, the convolutional neural network (CNN) has received great attention from researchers due to its powerful data-mining ability and adaptive feature learning. Based on recent research hotspots, the development history and trends of CNN are summarized and analyzed. Firstly, the basic structure of CNN is introduced, the important progress of classical CNN models for rolling bearing fault diagnosis in recent years is studied, and the problems with classical CNN algorithms are pointed out. Secondly, to address these problems, various methods and principles for optimizing CNN are introduced and compared, drawing on recent research achievements, from the perspectives of deep feature extraction, hyperparameter optimization, and network structure optimization. Although significant progress has been made in CNN-based fault diagnosis of rolling bearings, there is still room for improvement in addressing issues such as low accuracy on imbalanced data, weak model generalization, and poor network interpretability. Therefore, the future development trends of CNN are discussed finally: transfer learning models are introduced to improve the generalization ability of CNN, and interpretable CNNs are used to increase network interpretability.
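The basic CNN operations the review builds on can be shown in a few lines of pure Python on a 1-D vibration-like signal (the kernel here is a fixed edge detector chosen for illustration; real diagnosis networks stack many learned filters):

```python
# Minimal 1-D CNN building blocks: convolution, ReLU activation, max pooling.
def conv1d(signal, kernel):
    """Valid (no-padding) 1-D cross-correlation."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def maxpool(xs, size=2):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# A step in the signal, a crude stand-in for an impulsive bearing fault; the
# difference kernel [-1, 1] responds exactly at the step.
signal = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
features = maxpool(relu(conv1d(signal, [-1.0, 1.0])))
print(features)  # -> [0.0, 1.0]
```

A fault-diagnosis CNN repeats this pattern with learned kernels and ends in a classifier over fault categories; the optimizations the review surveys mostly concern how these layers are structured and tuned.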
Victor H R Cardoso et al 2024 Meas. Sci. Technol. 35 072001
This work addresses the historical development of techniques and methodologies for measuring the internal diameter of transparent tubes, starting from the original contributions of Anderson and Barr published in 1923 in the first issue of Measurement Science and Technology. Progress in this field is summarized, highlighting the emergence and significance of measurement approaches supported by optical fibers.
Weiqing Liao et al 2024 Meas. Sci. Technol. 35 062002
Mechanical fault diagnosis is crucial for ensuring the normal operation of mechanical equipment. With the rapid development of deep learning technology, big-data-driven methods provide a new perspective for machinery fault diagnosis. However, mechanical equipment operates in the normal condition most of the time, so the collected data are imbalanced, which degrades fault diagnosis performance. As a new approach for generating data, the generative adversarial network (GAN) can effectively address the issues of limited and imbalanced data in practical engineering applications. This paper provides a comprehensive review of GANs for mechanical fault diagnosis. Firstly, the development of GAN-based mechanical fault diagnosis, the basic theory of GAN and various GAN variants (GANs) are briefly introduced. Subsequently, GANs are summarized and categorized from the perspective of labels and models, and the corresponding applications are outlined. Lastly, the limitations of current research, future challenges and trends, and guidance on selecting a GAN for practical applications are discussed.
Jianghong Zhou et al 2024 Meas. Sci. Technol. 35 062001
Predictive maintenance (PdM) is currently the most cost-effective maintenance method for industrial equipment, offering improved safety and availability of mechanical assets. A crucial component of PdM is the remaining useful life (RUL) prediction for machines, which has garnered increasing attention. With the rapid advancements in industrial internet of things and artificial intelligence technologies, RUL prediction methods, particularly those based on pattern recognition (PR) technology, have made significant progress. However, a comprehensive review that systematically analyzes and summarizes these state-of-the-art PR-based prognostic methods is currently lacking. To address this gap, this paper presents a comprehensive review of PR-based RUL prediction methods. Firstly, it summarizes commonly used evaluation indicators based on accuracy metrics, prediction confidence metrics, and prediction stability metrics. Secondly, it provides a comprehensive analysis of typical machine learning methods and deep learning networks employed in RUL prediction. Furthermore, it delves into cutting-edge techniques, including advanced network models and frontier learning theories in RUL prediction. Finally, the paper concludes by discussing the current main challenges and prospects in the field. The intended audience of this article includes practitioners and researchers involved in machinery PdM, aiming to provide them with essential foundational knowledge and a technical overview of the subject matter.
Li et al
In practical engineering applications, the accuracy and stability of fault identification for centrifugal pumps are significantly reduced by the unbalanced distribution between normal and fault datasets, i.e. the number of normal working samples far exceeds the number of fault samples. To alleviate this bottleneck, this paper explores fault identification for centrifugal pumps based on the Wasserstein generative adversarial network with gradient penalty (WGAN-GP), combining a kinematics simulation and an experimental case. Specifically, unbalanced vibration datasets for failure patterns such as a damaged impeller of the centrifugal pump are simulated and collected in ADAMS software, and the unbalanced vibration signals are transformed into 2D grey-scale images. The generated grey-scale images are then fed back into the original grey-scale image dataset as new training data once the Nash equilibrium of the WGAN-GP model is reached. Finally, the fault patterns of the centrifugal pump are identified and summarized in a confusion matrix. Another public centrifugal-pump dataset is also employed to verify the accuracy of the WGAN-GP model. The results indicate fault identification accuracies of 95.07% and 98.0% for the kinematics simulation and the experimental case, respectively, showing that the issues of unbalanced distribution and insufficient data can be overcome effectively.
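The signal-to-image step described in this abstract is commonly implemented by windowing the 1D vibration signal into a square matrix and min-max normalizing it to 8-bit intensities. A minimal sketch in Python (the 64 x 64 window size and the normalization scheme are illustrative assumptions, not details from the paper):

```python
import numpy as np

def signal_to_grey_image(sig, size=64):
    """Reshape the first size*size samples of a 1D vibration signal
    into a 2D grey-scale image with 8-bit intensity values."""
    seg = np.asarray(sig[: size * size], dtype=float).reshape(size, size)
    lo, hi = seg.min(), seg.max()
    # Min-max normalize to [0, 255]; the epsilon guards a constant segment
    img = np.round(255.0 * (seg - lo) / (hi - lo + 1e-12))
    return img.astype(np.uint8)
```

Each such image can then serve as a real sample for GAN training, with the generator learning to synthesize additional fault-class images.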
Song et al
This paper presents a comprehensive review of the state-of-the-art techniques for predicting the remaining useful life (RUL) of rolling bearings. Four key aspects of bearing RUL prediction are considered: data acquisition, construction of health indicators (HI), development of RUL prediction algorithms, and evaluation of prediction results. Additionally, publicly available datasets that can be used to validate bearing prediction algorithms are described. The existing RUL prediction algorithms are categorized into three types and comprehensively reviewed: physics-based, statistics-based, and data-driven. In particular, the progress made in data-driven prediction methods is summarized, and typical methods such as RNN-, TCN-, GCN-, Transformer-, and TL-based methods are introduced in detail. Finally, the challenges faced by data-driven methods in bearing RUL prediction are discussed.
Zhang et al
In processing signals with singular value decomposition (SVD), one of the keys lies in building an appropriate Hankel matrix from the signal. To address the difficulty of extracting the feature information of rubbing faults between rotor and stator, and exploiting the fact that rubbing-fault information is closely related to the rotation period of the equipment, a new SVD method is presented in which the Hankel matrix is built from the rotation period of the machine. Firstly, using the periodicity of the rub-impact fault, the interval step between Hankel vectors is determined so as to self-adaptively build the Hankel matrix of the signal. Secondly, the newly built Hankel matrix is denoised using the singular value differential spectrum (SVDS). Thirdly, to minimize data loss, a strategy is proposed to rebuild the signal from the first and last rows of the denoised matrix. Fourthly, rubbing-fault features are extracted from the frequency spectrum of the reconstructed signal and the faults are identified. To verify the applicability and effectiveness of the presented algorithm, various simulated signals and test-rig signals from different states were employed, and the algorithm was compared with several classical methods. The results show that the proposed method not only effectively suppresses noise interference but also highlights fault feature information and correctly identifies rub-impact faults.
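The period-synchronous Hankel construction at the heart of this method can be sketched as follows. Here the inter-row step is the (assumed known) rotation period in samples, plain rank truncation stands in for the SVDS-based denoising, and overlapping estimates are simply averaged rather than using the paper's first/last-row reconstruction strategy:

```python
import numpy as np

def hankel_svd_denoise(x, step, rank):
    """Denoise a 1D signal by low-rank SVD truncation of a Hankel-style
    matrix whose rows are segments offset by `step` samples."""
    x = np.asarray(x, dtype=float)
    n_rows = (len(x) - 1) // step          # number of period-offset rows
    m = len(x) - (n_rows - 1) * step       # row length (covers the tail)
    H = np.stack([x[i * step : i * step + m] for i in range(n_rows)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Average the overlapping estimates of each sample back into a signal
    out = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(n_rows):
        out[i * step : i * step + m] += H_low[i]
        cnt[i * step : i * step + m] += 1.0
    return out / np.maximum(cnt, 1.0)
```

Because components locked to the rotation period repeat from row to row, they concentrate in the leading singular values, while broadband noise spreads across all of them.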
Wang et al
Expressways in desert areas are prone to sand lifting and accumulation. This study aims to explore the impact of various risk factors on sand accumulation on road surfaces. Initially, the study identifies the causes of these risks through on-site investigation. Subsequently, using Fluent numerical simulation, it examines how different wind speeds, wind directions, route angles, embankment heights, embankment widths, embankment slope ratios, and central median layouts affect sand accumulation. Finally, based on simulation results and sand accumulation data from the Uma Expressway's desert section, the study evaluates the importance of these factors using ordered logistic regression analysis and proposes strategic recommendations. The findings indicate that the degree of sand accumulation increases with higher wind speeds, more significant embankment heights, and variations in wind direction, route angle, and embankment width, as well as the configuration of the central median. Wind speed and embankment height are identified as the main factors influencing sand accumulation. Based on the risk assessment, the study suggests a four-point preventive strategy: (i) implementing wind speed management measures; (ii) optimizing embankment design; (iii) developing sand prevention strategies for the central median; and (iv) adjusting the alignment of the road relative to the wind direction.
Chen et al
Multi-light-screen measurement systems must be calibrated before being put into use, and a calibration method using live-fire practice is generally adopted. However, this method has many shortcomings, such as high cost, high risk, and poor efficiency. To effectively improve the measurement accuracy of a multi-light-screen measurement system, we propose a calibration method based on array optical signals. Based on the measurement principle of the system, the proposed calibration theory is described systematically; the aim is to map the given measurement results, in vector form, onto a specified time sequence. Furthermore, we develop a calibration platform whose core components are an arbitrary waveform generator and a tunable laser, so that it can generate a set of six optical signals with an exact sequential relationship. After receiving the optical signals, the system produces measurement results containing errors. Many calibration operations are performed to obtain the systematic errors, defined as the statistical average of the deviation between the given measurement results and those containing errors. Finally, we carry out comparison experiments between the proposed calibration method, combined with the error tensor, and the traditional one. The results show that the proposed calibration method is superior to the traditional method in comprehensive performance and offers advantages such as high efficiency, safety, and low cost.
Meng Zhang 2024 Meas. Sci. Technol. 35 086140
Rolling bearing fault diagnosis is crucial for ensuring the safe and reliable operation of mechanical equipment. Detecting faults directly from measurement signals is challenging due to severe noise and interference. Blind deconvolution (BD), as a preferred method, effectively recovers periodic pulses from the measured vibration signals of faulty bearings. This study introduces a simulated-annealing-based BD approach to enhance the pulse signal components reflecting faults in vibration signals measured on rolling bearings. The method iteratively searches for the optimal coordinates in a high-dimensional orthogonal optimization space, where the optimal coordinates represent the combination of inverse filter coefficients. Compared to the generalized spherical optimization space used in the 'Optimization-Blind Deconvolution' method of previous works, the proposed finite high-dimensional optimization space helps overcome the problem of inverse filter coefficient convergence, allowing inverse filters to be designed without constraints on their shape. To better accommodate the cyclostationary characteristics of bearing signals measured in practice, the proposed method employs a target vector that allows for uncertainty in pulse occurrence instants, thus overcoming the challenges introduced by pseudo-periodic phenomena resulting from bearing slippage. Numerical simulations and experiments on real bearing vibration signals confirm that the proposed method can design more flexible filters to enhance pulse-like patterns in signals while effectively utilizing limited filter resources. Its tolerance of inaccurate fault-period estimates, high background noise, and pulse randomness enables it to handle vibration measurement signals effectively in real-world scenarios.
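The search this abstract describes, over inverse-filter coefficients in a finite high-dimensional space, can be sketched with a plain simulated-annealing loop. Kurtosis is used here as a stand-in impulsiveness objective for the paper's target-vector criterion, and the filter length, step size, and cooling schedule are all illustrative assumptions:

```python
import numpy as np

def kurtosis(y):
    """Normalized fourth moment: large when the signal is impulsive."""
    y = y - y.mean()
    return np.mean(y ** 4) / (np.mean(y ** 2) ** 2 + 1e-12)

def sa_blind_deconv(x, filt_len=16, iters=500, seed=0):
    """Simulated-annealing search for FIR inverse-filter coefficients
    that maximize the kurtosis of the filtered signal."""
    rng = np.random.default_rng(seed)
    h = np.zeros(filt_len)
    h[0] = 1.0                              # start from the identity filter
    cur = kurtosis(np.convolve(x, h, mode="valid"))
    best_h, best = h.copy(), cur
    temp = 1.0
    for _ in range(iters):
        cand = h + 0.05 * rng.standard_normal(filt_len)
        cand /= np.linalg.norm(cand)        # keep filter energy fixed
        score = kurtosis(np.convolve(x, cand, mode="valid"))
        # Accept improvements always, worse moves with Boltzmann probability
        if score > cur or rng.random() < np.exp((score - cur) / temp):
            h, cur = cand, score
            if cur > best:
                best_h, best = h.copy(), cur
        temp *= 0.995                        # geometric cooling
    return best_h, best
```

The annealing acceptance rule lets the search escape local optima of the objective early on, while the cooling schedule makes it increasingly greedy as iterations proceed.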
Boyao Liu and William Allison 2024 Meas. Sci. Technol. 35 087001
We describe a compact constant current power supply with µA precision designed to drive coils. The unit generates currents from −125 mA to 125 mA with a load up to 10 Ω using a precision 16-bit digital to analogue converter, driven from a microcontroller (e.g. Raspberry Pi Pico). All power for the unit is derived from the 5 V of the microcontroller. As a demonstration of the capability of the power supply, it was applied to spin manipulation in a helium spin echo system.
Mohammadmahdi Abedi et al 2024 Meas. Sci. Technol. 35 085606
This study investigates the synergistic effects of cement, water, and hybrid carbon nanotube/graphene nanoplatelet (CNT/GNP) concentrations on the mechanical, microstructural, durability, and piezoresistive properties of self-sensing cementitious geocomposites. Varied concentrations of cement (8%–18%), water (8%–16%), and CNT/GNP (0.1%–0.34%, 1:1) were incorporated into cementitious stabilized sand (CSS). Mechanical characterization involved compression and flexural tests, while microstructural analysis utilized dry density, apparent porosity, water absorption, and non-destructive ultrasonic testing, alongside TGA, SEM, EDS, and x-ray diffraction analyses. The durability of the composite was also assessed over 180 freeze–thaw cycles. Moreover, the piezoresistive behavior of the nano-reinforced CSS was analyzed during cyclic flexural and compressive loading using the four-probe method. The optimal carbon nanomaterial (CNM) content was found to depend on the water and cement ratios. Generally, elevating the water content raised the optimal CNM concentration, primarily owing to improved dispersion and adequate water for the cement hydration process. The maximum increments in flexural and compressive strengths relative to plain CSS were significant, reaching up to approximately 30% for flexural strength and 41% for compressive strength for the specimen containing 18% cement, 12% water, and 0.17% CNM. This improvement was attributed to the nanoparticles' pore-filling function, acceleration of hydration, regulation of free water, and facilitation of crack-bridging mechanisms in the geocomposite. Further decreases in cement and water content adversely impacted the piezoresistive performance of the composite. Notably, specimens containing 8% cement (across all water content variations) and 10% cement (with 8% and 12% water content) showed a lack of piezoresistive response, whereas specimens containing 14% and 18% cement displayed substantial sensitivity, evidenced by elevated gauge factors under loading conditions.
Sven Schulze et al 2024 Meas. Sci. Technol. 35 085020
The 2019 redefinition of the kilogram not only changes the way mass is defined but also broadens the horizon for a direct realization of other standards. The True Becquerel project at the National Institute of Standards and Technology is creating a new paradigm for the realization and dissemination of radionuclide activity. Standard reference materials for radioactivity are supplied as aqueous solutions of specific radionuclides, characterized by massic activity in units of becquerel per gram of solution, Bq/g. The new method requires measuring the mass of a few milligrams of dispensed radionuclide solution. An electrostatic force balance is used, owing to its suitability for the milligram mass range. The goal is to measure the mass of 1 mg–5 mg of dispensed fluid with a relative uncertainty of less than 0.05%. A description of the balance operation is presented. Results of preliminary measurements with a reference mass indicate relative standard deviations of less than 0.5% over tens of tests and differ by 0.54% or less from an independent measurement of the reference mass.
Jing Yang et al 2024 Meas. Sci. Technol. 35 086130
Owing to the merits of automatic feature extraction and deep structure, intelligent fault diagnosis based on deep neural networks has attracted great attention. However, non-fault-state monitoring data from actual industrial machinery are abundant, whereas fault-state data are insufficient and weak. Furthermore, achieving multiple mixed-fault diagnosis from such skewed data distributions is extremely difficult. A diagnosis method for multiple mixed faults based on feature reconstruction and a sparse auto-encoder (AE) model is proposed in this study to bridge these gaps. The feature reconstruction algorithm is designed to address two issues: (1) the expensive computation caused by the long sequential features of vibration monitoring data, and (2) the extraction problem caused by the submersion of scarce data features. Furthermore, an adaptive loss function is formulated and a deep AE network is constructed to identify the health status and determine the fault level. Diagnoses of artificial and real faults verify the availability and superiority of the proposed scheme, demonstrating the adaptability and robustness of its hyperparameters.
Ahmad Satya Wicaksana et al 2024 Meas. Sci. Technol.
The ALICE experiment is one of the four experiments at the Large Hadron Collider (LHC) designed to investigate the state of matter under the very high energy densities produced during heavy-ion collisions. The ALICE Inner Tracking System (ITS) consists of seven concentric cylindrical layers of monolithic silicon pixel sensors known as ALICE Pixel Detectors (ALPIDEs). The sensors are used to reconstruct the paths of charged particles generated in the collisions. The sensor alignment of the detector must be adjusted to a high precision standard so that the detector can undertake high-resolution measurements. This paper introduces a method for measuring the reference markers utilized in determining the sensor alignment. Markers engraved at the chip corners have been detected using the Hough transform, Canny edge detection, and template matching techniques. The distances between two markers are measured to determine the accuracy of the pixel sensor alignment before and after assembly. The proposed methods exhibit an accuracy exceeding 99% and demonstrate high-speed analysis: the average processing times for detecting the circle and cross markers are 105.9 ms/image and 113.8 ms/image, respectively. Recent studies have shown deviations of up to 5 µm above the desired value in the measured sensor position. Such deviations do not represent a major issue; nevertheless, it is important to measure them in order to speed up, and improve the accuracy of, the recursive track-based alignment procedure used to reconstruct the position of each pixel sensor in the tracking detector. The proposed method offers a promising solution for delivering precise and rapid measurements over a large number of examined objects.
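Of the three detection techniques this abstract lists, template matching is the simplest to illustrate. The sketch below is a brute-force normalized cross-correlation in plain NumPy rather than the optimized OpenCV routines a real pipeline would use; the image size and cross-shaped marker are illustrative assumptions:

```python
import numpy as np

def match_template(image, template):
    """Return (row, col) of the best normalized cross-correlation match
    of `template` inside `image`, plus the correlation score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    rows, cols = image.shape
    for i in range(rows - th + 1):
        for j in range(cols - tw + 1):
            patch = image[i : i + th, j : j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm + 1e-12
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score
```

Once two marker positions are found this way, the inter-marker distance in pixels, scaled by the pixel pitch, gives the alignment measurement described in the abstract.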
Robin Erik Aschan et al 2024 Meas. Sci. Technol.
We delve into theoretical and experimental considerations for determining the spectral bidirectional transmittance distribution function (BTDF) of thick samples across a broad viewing zenith angle range. Nominally, BTDF is defined as the ratio of transmitted radiance to incident irradiance measured from the same plane. However, when employing thick samples for BTDF measurements, the viewing plane of the transmitted beam may shift from the front to the rear surface of the sample, altering the measurement geometry compared to using the sample front surface as the reference plane. Consequently, the viewing zenith angle from the sample rear surface increases relative to the sample front surface, and the sample-to-detector-aperture distance decreases by an amount corresponding to the sample thickness. We introduce a method for determining the BTDF of thick samples, considering the transformation of practical measurement results to a scenario where the measurements are conducted at a very large distance from the sample. To validate the method, we utilize a BTDF facility equipped with two instruments that differ significantly in their sample-to-detector-aperture distances. We evaluate the impact of a 2 mm sample thickness on the BTDF by assessing the ratio of transmitted and incident radiant fluxes as a function of viewing zenith angle relative to the sample rear surface. The evaluation is conducted in the wavelength range from 550 nm to 1450 nm in 300 nm steps, and in the viewing zenith angle range from −70° to 70° in 5° steps. Measurements are performed in-plane at an incident zenith angle of 0°. It is concluded that consistent determination of the BTDF of a thick sample is possible by converting the experimental parameters of real measurements at relatively short distances from the sample to correspond to those that would be obtained at very large distances from the sample.
Marcus Soter et al 2024 Meas. Sci. Technol. 35 085605
Platelets are activated immediately upon contact with non-physiological surfaces. Minimizing surface-induced platelet activation is important not only for platelet storage but also for other blood-contacting devices and implants. Chemical surface modification tunes the response of cells to contacting surfaces, but it requires a long process involving many regulatory challenges before transfer into a marketable product. Biophysical modification overcomes these limitations by modifying only the surface topography of already approved materials. The large and random structures available on platelet storage bags do not significantly affect platelets because of platelets' small size (only 1–3 μm) compared to other cells. We have recently demonstrated the feasibility of the mask-free nanoprinting fluid force microscopy (FluidFM) technology for writing dot-grid and hexagonal structures. Here, we demonstrate that the technique allows the fabrication of nanostructures with varying features, including grid, circle, triangle, and Pacman-like structures. Characteristics of the nanostructures, including height, width, and cross-line, were analyzed and compared using atomic force microscopy imaging. Based on the results, we identified several technical issues, such as the printing direction and the shape of structures, that directly altered nanofeatures during printing. Importantly, both geometry and interspace governed the degree of platelet adhesion; in particular, structures with triangular shapes and small interspaces prevented platelet adhesion better than others. We confirmed that FluidFM is a powerful technique for precisely fabricating a variety of desired nanostructures for the development of platelet/blood-contacting devices, provided technical issues during printing are well controlled.
Tianyi Yu et al 2024 Meas. Sci. Technol.
In the prediction of the remaining useful life (RUL) of faulty bearings, the identification and feature extraction of early bearing faults are very important. To improve the accuracy of early-fault RUL prediction, a bearing RUL prediction model based on weighted variable-loss degradation characteristics is proposed. The model is composed of a stacked denoising autoencoder (SDAE) module guided by a variable loss, a signal-to-noise feature adaptive weighting module, and a long short-term memory (LSTM) module for degradation-characteristic extraction and regression output. Firstly, the model improves the ability of the SDAE to extract weak fault features through dimension-raising learning and a variable loss function. Then, an adaptive weighting matrix is generated from the test signal to modulate the weight vector of the SDAE. Finally, the hidden-layer features of the SDAE are input into the LSTM model to extract the bearing-state degradation features and realize RUL prediction. The experimental results show that the proposed model can accurately predict the RUL of the test data in both the early-fault stage and the fault-development stage, and can give early fault warning of the bearing state.
Jiawei Liu et al 2024 Meas. Sci. Technol. 35 086314
Augmentation of the Global Navigation Satellite System by low Earth orbit (LEO) satellites is a promising approach that benefits from the advantages of LEO satellites. It requires, however, that errors and biases in the satellite downlink navigation signals be calibrated, modeled, or eliminated. This contribution introduces an approach for in-orbit calibration of the phase center offsets (PCOs) and code hardware delays of the LEO downlink navigation signal transmitter/antenna. Using the satellite geometries of Sentinel-3B and Sentinel-6A as examples, the study analyzes the formal precision and bias influences for potential downlink antenna PCOs and hardware delays of LEO satellites under different ground network distributions and processing periods. It was found that increasing the number of tracking stations and the processing period improves the formal precision of the PCOs and hardware delays; formal precisions better than 3.5 mm and 3 cm, respectively, can be achieved with 10 stations and 6 processing days. The bias projections of the real-time LEO satellite orbital and clock errors can reach below 3 mm in such a case. For near-polar LEO satellites, stations in polar areas are essential for strengthening the observation model.