Journal Description

JMIR Biomedical Engineering (JBME) is a new sister journal of JMIR (the leading open-access journal in health informatics), focusing on the application of engineering principles, technologies, and medical devices to medicine and biology. 

As an open access journal, we are read by clinicians and patients alike and, like all JMIR journals, we focus on readable, applied science reporting the design and evaluation of health innovations and emerging technologies. We publish original research, viewpoints, and reviews (both literature reviews and medical device/technology/app reviews).

JMIR Biomedical Engineering has been publishing since 2016 and features a rapid and thorough peer-review process. Articles are carefully copyedited and XML-tagged, ready for deposit in PubMed Central.

Be a founding author of this new journal and submit your paper today!



Recent Articles:

  • Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Usability and Practicality of a Novel Mobile Attachment for Aural Endoscopy (endoscope-i): Formative Usability Study


    Background: Our aims were to determine the usability and practicality of the endoscope-i system, a novel mobile attachment for aural endoscopy. This involved assessing the ease of use of the endoscope-i for different professionals and, ultimately, improving the system through constructive feedback. Objective: Our objectives were to assess the ease of use of the endoscope-i system in conducting an aural examination and to assess the feasibility of integrating it into clinical practice. We looked to assess its ease of use, effectiveness, and efficiency; to compare these to current practice with otoscopes; and to determine whether participants perceived the system to be able to produce an image of sufficient quality to make a clinical assessment. Finally, we wanted to assess the usefulness of the current training given for using the system, and we sought to gain feedback on the product from the differing specialists. Methods: A formative usability study of the endoscope-i system was conducted with 5 health care professionals. Each session lasted 40 minutes and involved audio/video consent, a hands-on session, a private semistructured interview, and an option to discuss the device with a company representative. Results: All participants found the endoscope-i system easy to use. The image quality was perceived to be greater than that achieved by current otoscopes. The ability to record images and view them retrospectively was also seen as a positive. Conclusions: This study did not identify any significant issues relating to the design, functionality, or application of the endoscope-i. Participants perceived the system as superior to current options, with a directly positive impact on their clinical practice.

  • Source: freepik; License: Licensed by JMIR.

    Fingerprint Biometric System Hygiene and the Risk of COVID-19 Transmission


    Biometric systems use scanners to verify the identity of human beings by measuring the patterns of their behavioral or physiological characteristics. Some biometric systems are contactless and do not require direct touch to perform these measurements; others, such as fingerprint verification systems, require the user to make direct physical contact with the scanner for a specified duration for the biometric pattern of the user to be properly read and measured. This may increase the possibility of contamination with harmful microbial pathogens or of cross-contamination of food and water by subsequent users. Physical contact also increases the likelihood of inoculation of harmful microbial pathogens into the respiratory tract, thereby triggering infectious diseases. In this viewpoint, we establish the likelihood of infectious disease transmission through touch-based fingerprint biometric devices and discuss control measures to curb the spread of infectious diseases, including COVID-19.

  • A person sleeps with the gold standard of sleep monitoring, polysomnography. Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Current Status and Future Challenges of Sleep Monitoring Systems: Systematic Review


    Background: Sleep is essential for human health. Considerable effort has been put into academic and industrial research and into the development of wireless body area networks for sleep monitoring in terms of nonintrusiveness, portability, and autonomy. With the help of rapid advances in smart sensing and communication technologies, various sleep monitoring systems have been developed with advantages such as being low cost, accessible, discreet, contactless, unmanned, and suitable for long-term monitoring. Objective: This paper aims to review current research in sleep monitoring to serve as a reference for researchers and to provide insights for future work. Specific selection criteria were chosen to include articles in which sleep monitoring systems or devices are covered. Methods: This review investigates the use of various common sensors in the hardware implementation of current sleep monitoring systems, as well as the types of parameters collected, sensor position on the body, the possible description of sleep phases, and the advantages and drawbacks of each approach. In addition, the data processing algorithms and software used in different studies on sleep monitoring systems, and their results, are presented. This review was not limited to laboratory research; it also investigated the various popular commercial products available for sleep monitoring, presenting their characteristics, advantages, and disadvantages. In particular, we categorized existing research on sleep monitoring systems based on how the sensor is used, including the number and type of sensors and the preferred position on the body. Beyond specific systems, issues concerning sleep monitoring systems such as privacy and economic and social impact are also included. Finally, we present an original sleep monitoring system solution developed in our laboratory.
Results: By retrieving a large number of articles and abstracts, we found that hotspot techniques such as big data, machine learning, artificial intelligence, and data mining have not been widely applied to the sleep monitoring research area. Accelerometers are the most commonly used sensor in sleep monitoring systems. Most commercial sleep monitoring products cannot provide performance evaluation based on gold standard polysomnography. Conclusions: Combining hotspot techniques such as big data, machine learning, artificial intelligence, and data mining with sleep monitoring may be a promising research approach and will attract more researchers in the future. Balancing user acceptance and monitoring performance is the biggest challenge in sleep monitoring system research.

  • Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution + Noncommercial (CC-BY-NC).

    Video Cloud Services for Hospitals: Designing an End-to-End Cloud Service Platform for Medical Video Storage and Secure Access


    The amount of medical video data that has to be securely stored has been growing exponentially. This rapid expansion is mainly caused by the introduction of higher video resolutions such as 4K and 8K to medical devices and the growing use of telemedicine services, along with a general trend toward increasing transparency with respect to medical treatment, resulting in more and more medical procedures being recorded. Such video data, as medical data, must be maintained for many years, resulting in datasets at the exabyte scale that each hospital must be able to store in the future. Currently, hospitals do not have the required information and communications technology infrastructure to handle such large amounts of data in the long run. In this paper, we discuss the challenges and possible solutions to this problem. We propose a generic architecture for a holistic, end-to-end recording and storage platform for hospitals, define crucial components, and identify existing and future solutions to address all parts of the system. This paper focuses mostly on the recording part of the system by introducing the major challenges in the area of bioinformatics, with particular focus on three major areas: video encoding, video quality, and video metadata.

  • Can a machine learning-based app and saying "aaaaah" into the microphone support diagnosing Parkinson disease? Source: The Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Robust Feature Engineering for Parkinson Disease Diagnosis: New Machine Learning Techniques


    Background: Parkinson disease (PD) is a common neurodegenerative disorder that affects between 7 and 10 million people worldwide. No objective test for PD currently exists, and studies suggest misdiagnosis rates of up to 34%. Machine learning (ML) presents an opportunity to improve diagnosis; however, the size and nature of data sets make it difficult to generalize the performance of ML models to real-world applications. Objective: This study aims to consolidate prior work and introduce new techniques in feature engineering and ML for diagnosis based on vowel phonation. Additional features and ML techniques were introduced, showing major performance improvements on the large mPower vocal phonation data set. Methods: We used 1600 randomly selected /aa/ phonation samples from the entire data set to derive rules for filtering out faulty samples from the data set. The application of these rules, along with a joint age-gender balancing filter, results in a data set of 511 PD patients and 511 controls. We calculated features on a 1.5-second window of audio, beginning at the 1-second mark, for a support vector machine. This was evaluated with 10-fold cross-validation (CV), with stratification for balancing the number of patients and controls for each CV fold. Results: We showed that the features used in prior literature do not perform well when extrapolated to the much larger mPower data set. Owing to the natural variation in speech, the separation of patients and controls is not as simple as previously believed. We presented significant performance improvements using additional novel features (with 88.6% certainty, derived from a Bayesian correlated t test) in separating patients and controls, with accuracy exceeding 58%. Conclusions: The results are promising, showing the potential for ML in detecting symptoms imperceptible to a neurologist.
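The stratification step described in the Methods, balancing the number of patients and controls within each cross-validation fold, can be sketched in plain Python. This is an illustrative reimplementation, not the authors' code; the fold count and the balanced 511/511 labels simply mirror the abstract.

```python
import random

def stratified_kfold(labels, k=10, seed=0):
    """Split sample indices into k folds while keeping the patient/control
    ratio roughly equal in every fold. `labels` is a list of 0/1 labels."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for cls in set(labels):
        idx = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(idx)
        # deal class members round-robin so each fold gets ~len(idx)/k of them
        for pos, i in enumerate(idx):
            folds[pos % k].append(i)
    return folds

# 511 patients (1) and 511 controls (0), as in the balanced data set above
labels = [1] * 511 + [0] * 511
folds = stratified_kfold(labels, k=10)
per_fold = [(sum(labels[i] for i in f), len(f)) for f in folds]
```

Each resulting fold contains 51 or 52 patients and an almost identical number of controls, so every held-out fold preserves the overall class balance.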

  • Dynamic Platform Swing Walkway. Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Effect of Platform Swing Walkway on Locomotor Behavior in Children With Diplegic Cerebral Palsy: Randomized Controlled Trial


    Background: Limited attention has been given to the effectiveness of the platform swing walkway, which is a common way to improve gait pattern through activation of sensory stimuli (visual, auditory, vestibular, and somatosensory). Objective: The objective of this study was to determine the effect of a platform swing walkway on gait parameters in children with diplegic cerebral palsy (CP). Methods: A total of 30 children of both sexes (aged 6-8 years) with diplegic CP were enrolled in this study. They were randomly assigned into two groups of equal number: the control group (n=15) and the study group (n=15). The control group received the conventional physical therapy plan, whereas the study group received the same conventional physical therapy program in addition to gait training on a platform swing walkway. Temporal parameters during the gait cycle were collected using gait tracker video analysis, and the Gross Motor Function Measure (GMFM-88) was used to assess standing and walking (Dimensions D and E) before and after the treatment program. Results: A statistically significant improvement in both groups was noted when comparing the mean values of all measured variables before and after treatment (P≤.05). There were significant differences between the control and study groups with respect to all measured variables, which favored the study group when comparing the posttreatment outcomes (P≤.05). Conclusions: Results suggest that gait training on platform swing walkways can be included as an alternative therapeutic modality to enhance gait parameters and gross motor function in children with diplegic CP. Trial Registration: ClinicalTrials.gov NCT04246658

  • Model of a telerehabilitation program for tracking knee angle. Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Telerehabilitation for Patients With Knee Osteoarthritis: A Focused Review of Technologies and Teleservices


    Background: Telerehabilitation programs are designed with the aim of improving the quality of services as well as overcoming existing limitations in terms of resource management and accessibility of services. This review will collect recent studies investigating telerehabilitation programs for patients with knee osteoarthritis while focusing on the technologies and services provided in the programs. Objective: The main objective of this review is to identify and discuss the modes of service delivery and technologies in telerehabilitation programs for patients with knee osteoarthritis. The gaps, strengths, and weaknesses of programs will be discussed individually. Methods: Studies published in English since 2000 were retrieved from the EMBASE, Scopus, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, Physiotherapy Evidence Database (PEDro), and PsycINFO databases. The search words “telerehabilitation,” “telehealth,” “telemedicine,” “teletherapy,” and “ehealth” were combined with “knee” and “rehabilitation” to generate a data set of studies for screening and review. The final group of studies reviewed here includes those that implemented teletreatment for patients for at least 2 weeks of rehabilitation. Results: In total, 1198 studies were screened, and the full text of 154 studies was reviewed. Of these, 38 studies were included, and data were extracted accordingly. Four modes of telerehabilitation service delivery were identified: phone-based, video-based, sensor-based, and expert system–based telerehabilitation. The intervention services provided in the studies included information, training, communication, monitoring, and tracking. Video-based telerehabilitation programs were frequently used. Among the identified services, information and educational material were introduced in only one-quarter of the studies. 
Conclusions: Video-based telerehabilitation programs can be considered the best alternative solution to conventional treatment. This study shows that, in recent years, sensor-based solutions have also become more popular due to rapid developments in sensor technology. Nevertheless, communication and human-generated feedback remain as important as monitoring and intervention services.

  • Centers for Disease Control and Prevention (CDC) computer technology specialist holding a square-shaped, gene sequencing computer chip. This chip was designed to quicken the processes involved in the identification of viral DNA. Source: CDC Public Health Image Library; Copyright: James Gathany; License: Public Domain (CC0).

    Innovation in Pediatric Medical Devices: Proceedings From The West Coast Consortium for Technology & Innovation in Pediatrics 2019 Annual Stakeholder Summit


    Pediatric medical devices cover a broad array of indications and risk profiles, and have helped to reduce disease burden and improve quality of life for numerous children. However, many of the devices used in pediatrics are not intended for or tested on children. Several barriers have been identified that pose difficulties in bringing pediatric medical devices to the market. These include a small market and small sample size; unique design considerations; regulatory complexities; lack of infrastructure for research, development, and evaluation; and low return on investment. In 2007, the Food and Drug Administration (FDA) created the Pediatric Device Consortia (PDC) Grants Program under the administration of the Office of Orphan Products Development. In 2018, the FDA awarded over US $30 million to five new PDCs. The West Coast Consortium for Technology & Innovation in Pediatrics (CTIP) is one of these PDCs and is centered at the Children’s Hospital Los Angeles. In February 2019, CTIP convened its primary stakeholders to discuss its priorities and activities for the new grant cycle. In this paper, we have presented a report of the summit proceedings to raise awareness and advocate for patients and pediatric medical device innovators as well as to inform the activities and priorities of other organizations and agencies engaged in pediatric medical device development.

  • EDA wearable device. Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Challenges and Opportunities in Collecting and Modeling Ambulatory Electrodermal Activity Data


    Background: Ambulatory assessment of electrodermal activity (EDA) is an emerging technique for capturing individuals’ autonomic responses to real-life events. There is currently little guidance available for processing and analyzing such data in an ambulatory setting. Objective: This study aimed to describe and implement several methods for preprocessing and constructing features for use in modeling ambulatory EDA data, particularly for measuring stress. Methods: We used data from a study examining the effects of stressful tasks on EDA of adolescent mothers (AMs). A biosensor band recorded EDA 4 times per second and was worn during an approximately 2-hour assessment that included a 10-min mother-child videotaped interaction. The initial processing included filtering noise and motion artifacts. Results: We constructed the features of the EDA data, including the number of peaks and their amplitude as well as EDA reactivity, quantified as the rate at which AMs returned to baseline EDA following an EDA peak. Although the pattern of EDA varied substantially across individuals, various features of EDA may be computed for all individuals enabling within- and between-individual analyses and comparisons. Conclusions: The algorithms we developed can be used to construct features for dry-electrode ambulatory EDA, which can be used by other researchers to study stress and anxiety.
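The features described above (peak count, peak amplitude, and reactivity as the return toward baseline after a peak) can be illustrated with a toy extractor. This is a simplified sketch, not the authors' algorithm: the strict local-maximum peak rule, the 1-second pre-peak baseline window, and the half-recovery threshold are assumptions for illustration only.

```python
def eda_features(signal, fs=4, recovery_frac=0.5):
    """Toy feature extraction for an EDA trace sampled at `fs` Hz (the band
    above records 4 times per second). Counts peaks (strict local maxima),
    records their amplitudes, and estimates reactivity as the time taken to
    fall back halfway toward the pre-peak level."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] > signal[i + 1]]
    amplitudes = [signal[i] for i in peaks]
    recoveries = []
    for i in peaks:
        base = signal[max(0, i - fs)]                # level ~1 s before peak
        target = base + (signal[i] - base) * recovery_frac
        for j in range(i + 1, len(signal)):
            if signal[j] <= target:
                recoveries.append((j - i) / fs)      # seconds to half-recovery
                break
    return {"n_peaks": len(peaks), "amplitudes": amplitudes,
            "recovery_s": recoveries}

# a short synthetic rise-and-fall trace: one peak, half-recovery in 0.5 s
feats = eda_features([0, 1, 2, 3, 2, 1, 0], fs=4)
```

A production pipeline would first filter noise and motion artifacts, as the abstract notes, before computing such features.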

  • Source: EMFIT / Placeit; Copyright: EMFIT / Placeit; License: Licensed by JMIR.

    A Contact-Free, Ballistocardiography-Based Monitoring System (Emfit QS) for Measuring Nocturnal Heart Rate and Heart Rate Variability: Validation Study


    Background: Heart rate (HR) and heart rate variability (HRV) measurements are widely used to monitor stress and recovery status in sedentary people and athletes. However, effective HRV monitoring should occur on a daily basis because sparse measurements do not allow for a complete view of the stress-recovery balance. Morning electrocardiography (ECG) measurements with HR straps are time-consuming and arduous to perform every day, and thus compliance with regular measurements is poor. Contact-free, ballistocardiography (BCG)-based Emfit QS is effortless for daily monitoring. However, to the best of our knowledge, there is no study on the accuracy of nocturnal HR and HRV measured via BCG under real-life conditions. Objective: The aim of this study was to evaluate the accuracy of Emfit QS in measuring nocturnal HR and HRV. Methods: Healthy participants (n=20) completed nocturnal HR and HRV recordings at home using Emfit QS and an ECG-based reference device (Firstbeat BG2) during sleep. Emfit QS measures BCG by a ferroelectret sensor installed under a bed mattress. HR and the root mean square of successive differences between RR intervals (RMSSD) were determined for 3-minute epochs and the sleep period mean. Results: A trivial mean bias was observed in the mean HR (mean –0.8 bpm [beats per minute], SD 2.3 bpm, P=.15) and Ln (natural logarithm) RMSSD (mean –0.05 ms, SD 0.25 ms, P=.33) between Emfit QS and ECG. In addition, very large correlations were found in the mean values of HR (r=0.90, P<.001) and Ln RMSSD (r=0.89, P<.001) between the devices. A greater amount of erroneous or missing data (P<.001) was observed in the Emfit QS measurements (28.3%, SD 14.4%) compared with the reference device (1.1%, SD 2.3%). The results showed that 5.0% of the mean HR and Ln RMSSD values were outside the limits of agreement. 
Conclusions: Based on the present results, Emfit QS provides nocturnal HR and HRV data with an acceptable, small mean bias when calculating the mean of the sleep period. Thus, Emfit QS has the potential to be used for the long-term monitoring of nocturnal HR and HRV. However, further research is needed to assess reliability in HR and HRV detection.
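RMSSD, the HRV index compared between Emfit QS and ECG above, has a standard time-domain definition that is easy to state in code. The RR series below is invented for illustration.

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences between RR intervals (ms):
    sqrt(mean((RR[i+1] - RR[i])^2)) over the recording."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]   # illustrative RR intervals in ms
value = rmssd(rr)                # successive diffs: 10, -20, 15, -10
ln_value = math.log(value)       # "Ln RMSSD" as reported in the study
```

Taking the natural logarithm, as the authors do, is a common normalization because raw RMSSD is strongly right-skewed across individuals.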

  • Source: Adobe Stock; Copyright: Ocskay Mark; License: Licensed by JMIR.

    Heart Rate and Oxygen Saturation Monitoring With a New Wearable Wireless Device in the Intensive Care Unit: Pilot Comparison Trial


    Background: Continuous cardiac monitoring with wireless sensors is an attractive option for early detection of arrhythmia and conduction disturbances and the prevention of adverse events leading to patient deterioration. We present a new sensor design (SmartCardia), a wearable wireless biosensor patch, for continuous cardiac and oxygen saturation (SpO2) monitoring. Objective: This study aimed to test the clinical value of a new wireless sensor device (SmartCardia) and its usefulness in monitoring the heart rate (HR) and SpO2 of patients. Methods: We performed an observational study and monitored the HR and SpO2 of patients admitted to the intensive care unit (ICU). We compared the device under test (SmartCardia) with the ICU-grade monitoring system (Dräger-Healthcare). We defined optimal correlation between the gold standard and the wireless system as <10% difference for HR and <4% difference for SpO2. Data loss and discrepancy between the two systems were critically analyzed. Results: A total of 58 ICU patients (42 men and 16 women), with a mean age of 71 years (SD 11), were included in this study. A total of 13.49 (SD 5.53) hours per patient were recorded. This represents a total recorded period of 782.3 hours. The mean difference between the HR detected by the SmartCardia patch and the ICU monitor was 5.87 (SD 16.01) beats per minute (bias=–5.66, SD 16.09). For SpO2, the average difference was 3.54% (SD 3.86; bias=2.9, SD 4.36) for interpretable values. SmartCardia’s patch measures SpO2 only under low-to-no activity conditions and otherwise does not report a value. Data loss and noninterpretable values of SpO2 represented 26% (SD 24) of total measurements. Conclusions: The SmartCardia device demonstrated clinically acceptable accuracy for HR and SpO2 monitoring in ICU patients.
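The bias and SD figures quoted above are the standard Bland-Altman-style summary of device agreement. A minimal sketch follows, with invented HR readings; the abstract does not publish the per-sample data, so the numbers here are purely illustrative.

```python
def agreement(test_vals, ref_vals):
    """Mean bias, SD of the differences, and 95% limits of agreement between
    a device under test and a reference monitor (Bland-Altman summary)."""
    diffs = [t - r for t, r in zip(test_vals, ref_vals)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired HR samples: wearable patch vs ICU monitor (bpm)
hr_patch = [72, 80, 65, 90]
hr_icu = [70, 82, 66, 88]
bias, sd, loa = agreement(hr_patch, hr_icu)
```

A low bias with wide limits of agreement, as in the study's HR results, indicates no systematic offset but substantial sample-to-sample scatter.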

  • Source: freepik; Copyright: rawpixel; License: Licensed by JMIR.

    Longitudinal Magnetic Resonance Imaging as a Potential Correlate in the Diagnosis of Alzheimer Disease: Exploratory Data Analysis


    Background: Alzheimer disease (AD) is a degenerative progressive brain disorder in which symptoms of dementia and cognitive impairment intensify over time. Numerous factors, related or unrelated to a patient's lifestyle, can result in a higher risk for AD. Diagnosing the disorder in its beginning period is important, and several techniques are used to diagnose AD. A number of studies have been conducted on the detection and diagnosis of AD. This paper reports an empirical study performed on the longitudinal magnetic resonance imaging (MRI) Open Access Series of Imaging Studies (OASIS) dataset. Furthermore, the study highlights several factors that influence the prediction of AD. Objective: This study aimed to correlate the effect of various factors such as age, gender, education, and socioeconomic background of patients with the development of AD. The effect of patient-related factors on the severity of AD was assessed on the basis of MRI features, Mini-Mental State Examination (MMSE), Clinical Dementia Rating (CDR), estimated total intracranial volume (eTIV), normalized whole brain volume (nWBV), and Atlas Scaling Factor (ASF). Methods: In this study, we attempted to establish the role of longitudinal MRI in an exploratory data analysis (EDA) of AD patients. EDA was performed on the dataset of 150 patients for 343 MRI sessions (mean age 77.01 [SD 7.64] years). The T1-weighted MRI of each subject on a 1.5-Tesla Vision (Siemens) scanner was used for image acquisition. Scores of three features, MMSE, CDR, and ASF, were used to characterize the AD patients included in this study. We assessed the role of various features (ie, age, gender, education, socioeconomic status, MMSE, CDR, eTIV, nWBV, and ASF) on the prognosis of AD. Results: The analysis further establishes the role of gender in the prevalence and development of AD in older people.
Moreover, a considerable relationship has been observed between education and socioeconomic position on the progression of AD. Also, outliers and linearity of each feature were determined to rule out the extreme values in measuring the skewness. The differences in nWBV between CDR=0 (nondemented), CDR=0.5 (very mild dementia), and CDR=1 (mild dementia) are significant (ie, P<.01). Conclusions: A substantial correlation has been observed between the pattern and other related features of longitudinal MRI data that can significantly assist in the diagnosis and determination of AD in older patients.


Latest Submissions Open for Peer-Review:

  • Changes in the contents of the oral flora, in gingival hypertrophy caused by fixed orthodontic appliances

    Date Submitted: Sep 11, 2020

    Open Peer Review Period: Sep 11, 2020 - Nov 6, 2020

    Background: The placement of an orthodontic appliance in the oral cavity, according to the literature, should influence the composition of the oral flora, especially the subgingival flora. Objective: The purpose of the study was to evaluate the subgingival flora of patients with fixed orthodontic appliances, regardless of placement time. Methods: In 3 patients with fixed orthodontic appliances, a bacterial sample of the gingival sulcus was taken for laboratory examination. Patients were clinically evaluated for the presence of, or tendency toward, gingival hypertrophy. Results: Of the 3 cases included in the study, 1 was positive for Streptococcus anginosus and doxycycline sensitive. The tendency for gingival hypertrophy was at most 3% to 1.5%, respectively, in each patient. In the patient with altered oral flora, daily topical treatment with tetracycline, placed in the gingival sulcus, was applied. Conclusions: Alteration of the oral flora with the placement of fixed orthodontic appliances is not a fully verified finding; establishing it requires patient follow-up from the time the appliance is placed until its removal after orthodontic treatment, over the typical 2-3-year treatment period. The tendency for gingival hypertrophy, however, appears high in the presence of fixed orthodontic appliances.

  • Integrating Artifact Detection with Clinical Decision Support Systems: Observational Study

    Date Submitted: Aug 13, 2020

    Open Peer Review Period: Aug 13, 2020 - Oct 8, 2020

    Background: Clinical decision support systems (CDSS) have the potential to lower patient mortality and morbidity rates. However, signal artifacts present in physiologic data affect the reliability and accuracy of CDSS. Moreover, patient monitors and other medical devices generate false alarms while processing artifactual data. This leads to alarm fatigue due to increased noise levels, staff disruption, and staff desensitization in busy critical care environments, thereby adversely affecting the quality of care at the patient bedside. Hence, artifact detection (AD) algorithms play a crucial role in assessing the quality of physiologic data and mitigating the impact of these artifacts. Objective: Recently, we developed a novel AD framework for integrating AD algorithms with CDSS. The framework was designed with features to support real-time implementation within critical care. In this research, we evaluate the framework and its features in a false alarm reduction study. We develop static framework component models followed by dynamic framework compositions to formulate four CDSS. We evaluate these formulations using neonatal patient data and validate the six framework features of flexibility, reusability, signal quality indicator standardization, scalability, customizability, and real-time implementation support. Methods: We develop four exemplar static AD components with standardized requirements and provisions interfaces facilitating interoperability of framework components. These AD components are mixed and matched into four different AD compositions to mitigate artifacts. Each AD composition is integrated with a novel static clinical event detection (CED) component to formulate and evaluate dynamic CDSS for arterial oxygen saturation (SpO2) alarm generation. Results: With a sensitivity of 80%, the lowest achievable SpO2 false alarm rate is 39%.
This demonstrates the utility of the framework in identifying the optimal dynamic composition to serve a given clinical need. Conclusions: The framework features including reusability, signal quality indicator standardization, scalability, and customizability allow for novel CDSS formulations to be evaluated and compared. The optimal solution for a CDSS can then be hard-coded and integrated within clinical workflows for real-time implementation. Flexibility to serve different clinical needs and standardized component interoperability of the framework support the potential for real-time clinical implementation of AD.
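As one way to picture the sensitivity/false-alarm trade-off reported above, a per-sample tally can be sketched. The definitions below (false alarm rate as the share of raised alarms that are false) and the example alarm stream are illustrative assumptions, not the framework's actual evaluation protocol.

```python
def alarm_metrics(alarms, events):
    """Sensitivity and false alarm rate for a stream of alarm decisions,
    given parallel booleans: alarms[i] = an alarm was raised at sample i,
    events[i] = a true clinical event was present at sample i."""
    tp = sum(a and e for a, e in zip(alarms, events))          # true alarms
    fn = sum((not a) and e for a, e in zip(alarms, events))    # missed events
    fp = sum(a and (not e) for a, e in zip(alarms, events))    # false alarms
    sensitivity = tp / (tp + fn)
    false_alarm_rate = fp / (tp + fp)   # fraction of raised alarms that are false
    return sensitivity, false_alarm_rate

# hypothetical stream: 5 samples, 3 alarms raised, 3 real events
sens, far = alarm_metrics([True, True, True, False, True],
                          [True, True, False, True, False])
```

Under these toy definitions, suppressing artifactual samples before alarm generation reduces `fp`, lowering the false alarm rate at a fixed sensitivity, which is the trade-off the study quantifies.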