Friday, September 24, 2021

A Survey of Body Area Networks

This paper is written for my Wireless Network Security presentation


Celal Ergun 

Istanbul Commerce University 

celalergun@gmail.com 

Abstract 


Body Area Network (BAN) refers to a data acquisition system that uses wireless communications to deliver the collected data. There are medical and non-medical uses for these systems. The general idea is to use low-power, low-speed communications to log sensor outputs. There are sensors for most common measurements, such as temperature, pressure, movement, acceleration and the like. This area of technology has its own challenges, e.g., the low variety of sensors, battery lifetime, communication speed and communication range. 

1. Introduction 

The term BAN (Body Area Network) is used for systems that measure body conditions via sensors and send the data over a network connection. The technology is not yet anywhere near what we expect of it. The good news is that it is emerging, and compared with the last decade we have many new computing chips and sensors. A number of papers have been published on different areas of this domain. Some of them are on battery technology, including self-charging systems. Most of the rest concentrate on networking and, consequently, network security.  

2. BAN in a nutshell 

BAN systems consist of sensors, computing units and communication interfaces. When it comes to sensors, there are three kinds:  

a. Implant sensors 

b. Skin mounted sensors 

c. External sensors 

Implant sensors are hard to install and hard to maintain, usually requiring a medical operation to embed them into the subject's body. They are typically used to monitor the heart, lungs, pancreas and kidneys. These sensors have an additional problem: power. It is not possible to use nuclear long-life batteries in the human body, so most implants run on a battery that needs to be changed from time to time. To overcome this, one could use wireless power transmission to recharge the batteries [1]. Most heart implants use a long-life non-rechargeable battery that needs to be replaced roughly every six years. However, Dagdeviren et al. [2] invented a self-charging device that uses piezoelectric materials and harvests mechanical body energy in the form of electricity.  

Skin-mounted sensors are easy to use and easy to recharge because they do not require a medical operation to install. Although they are limited in how much information they can collect about the body, they have their place: they are useful for measuring blood pressure, heart rate, stress levels, blood sugar and so on. Batteries are less of a problem by their nature, since the sensors are surface-mounted, mostly with a Velcro strap, usually on the wrist or chest of the subject. 

External sensors are the least informative. They are used by athletes such as hikers, mountaineers and swimmers, and usually measure atmospheric pressure, altitude, environment temperature, speed and acceleration. They are also the least constrained by battery power: they can use high-capacity power banks or even power outlets. 

A BAN device does not always need a sensor to work, however. There are devices that can be considered BAN devices yet have no sensors at all, including but not limited to medicine reminders, step counters, calorie counters and wireless payment devices. 

Now let’s look at the network part, which makes a BAN unit a network device. 

3. Networking and topology 

There are four topological schemes on body area networks. 

a. In-body link 

b. On-body communications 

c. Body-to-body link 

d. Off-body link 

BAN devices use different communication protocols. The most popular are WiFi, Zigbee, Bluetooth and IEEE 802.15.6. 

Bluetooth is the most popular choice when it comes to wearable devices. Since it is proven and reliable at short range, it is easily used in smartphone-powered applications.  

Smartphones are usually the hub of these systems. One or more sensors gather data, perform computations on it, store it in volatile or non-volatile memory, and then send it over Bluetooth to the smartphone. 

There are also web sites for displaying and sharing this information. One of the most popular examples is the MapMyRun service [3], to which users of the discontinued Endomondo service were migrated. The service tracks the training path, measures maximum and average speed and, of course, shares it all with friends.  

Bluetooth communications are common, easy to implement, inexpensive thanks to cheap communication chips, low power and field-proven. There are also audio and visual aids that can be considered BAN systems and use Bluetooth. 

However, when it comes to BAN communications we have a dedicated choice: IEEE 802.15.6. The standard specifies both PHY and MAC (Media Access Control) layers, defining narrowband, ultra-wideband (UWB) and human body communication (HBC) physical layers. The communication range is deliberately limited due to 

a. Security considerations 

b. Low power requirements 

802.15.6 is specifically designed for BAN communications; body implants and skin-mounted (surface) sensors may use it. Its operating frequencies are selected among the bands safest for the human body. 

4. Security 

Security threats to BAN networks can be grouped under four of the OSI (Open Systems Interconnection) layers: 

a. Physical: Jamming and tampering 

b. Link: Collision, unfairness, exhaustion 

c. Network: Blackhole, Sybil, selective forwarding 

d. Transport: Flooding, de-synchronization 

Jamming and tampering are denial-of-service attacks that make it hard, if not impossible, to communicate with the hub or server systems.  

Collision attacks are another denial-of-service type that makes the communication unreliable. Exhaustion attacks, which drain a node's battery, are dangerous by nature, but on critical health-support systems they can be lethal. 

The Sybil attack is named after the book Sybil, which describes the life of a woman with dissociative identity disorder. In this attack, a single malicious node presents many different identities to the network. 

Flooding and de-synchronization attacks disrupt communications by making the transport layer unstable.  

BAN systems rely mostly on wireless communications and are therefore especially prone to PHY-layer attacks. 

Because of their low-power nature, these systems cannot implement hard-to-break cryptography schemes, so their encryption can be broken within a relatively short time [4]. 

5. References 

[1] H. Su and X. Zhang, "Battery-Dynamics Driven TDMA MAC Protocols for Wireless Body-Area Monitoring Networks in Healthcare Applications."

[2] C. Dagdeviren, P. Joe, O. L. Tuzman, K.-I. Park, K. J. Lee, Y. Shi, Y. Huang and J. A. Rogers, "Recent progress in flexible and stretchable piezoelectric devices for mechanical energy harvesting, sensing and actuation."

[3] https://www.mapmyrun.com/

[4] M. Asam, T. Jamal, A. Ajaz (PIEAS University, Islamabad, Pakistan), M. Adeel and A. Hassan (Superior University, Lahore, Pakistan), "Challenges in Wireless Body Area Network."

[5] D. M. Barakah and M. Ammad-Uddin, "A Survey of Challenges and Applications of Wireless Body Area Network (WBAN) and Role of a Virtual Doctor Server in Existing Architecture."

[6] S. Al-Janabi, I. Al-Shourbaji, M. Shojafar and S. Shamshirband, "Survey of main challenges (security and privacy) in wireless body area networks for healthcare applications."

Saturday, September 11, 2021

Process Based Isolation

Well, that’s a fancy way of saying we run an exe/binary for every job we have, so that we don’t have to deal with (most of) the concurrency details.

When you have standard jobs to be done at selected intervals, let us say every day or every hour, you need timers. When a timer kicks in, you start a job. We love the jobs that finish without our intervention. But since life is cruel to us human beings, we need to check every process to see if it finished, was interrupted by an error or stopped prematurely. 

One of the proper ways to do this is to create a service or a daemon (on Windows or Linux) and have a separate thread for every job. Developing multi-threaded applications is hard. Debugging them is even harder. Multi-threading is a headache for novice programmers, and experienced programmers are not immune to the same problems. Thanks to the Object Oriented Programming paradigm, we can create separate objects that have their own business logic, encapsulate their internal variables and so on, to isolate these jobs. But what if a thread collapses and drags the rest of the world into its black hole?


(By the way this is our first recording system, back in 2004. A bunch of PCs connected to a monitor via a KVM, and they are recording from ordinary DVB devices)

Let me explain from the very beginning. I used to be a recognition specialist for audio and video streams. For this job, I first needed to record TV and radio broadcasts in order to analyze and process them and extract information (which is called Media Information Retrieval), then compress and back them up together with the metadata.

Here is one of my early recording applications. It can record one channel at a time. It reads its configuration from an ini file to know which hardware it is going to use, which disk and directory to save the media file to, and so on.


Is it working? Yes. But what happens if the process terminates prematurely? It's just between the process and the Windows OS. We never have a chance to know without checking it manually. Checking it manually? C’mon, in which century are we living?

There are several reasons for a process to go down. The first is a memory leak. A careless application takes memory when in need and never returns it to the system. That happens with the classical memory allocation paradigm, where the coder needs to free what they have created (pun intended). In Delphi, you need to free an object if you created it. In C++ you need to delete a pointer if you acquired it with the new operator. In C you need to call free() for everything you allocated with malloc(). That’s all. It’s that easy. Well, no. In real-life projects, the coder sometimes loses control over their code and forgets to free what they have created. 

The managed memory model largely solves this problem by employing a garbage collector. This is all well and good if every object you use is a managed object. I’ve seen many colleagues whining about a memory leak in their .NET project, especially when working with fancy bitmaps in a classic Windows Forms application. A memory leak in a .NET project? Yes. Coders who never used the classic memory allocation model easily forget to call the Dispose() method when system memory needs to be returned to the operating system. This usually happens with bitmap images in C#, and coders who do not think in the classical way easily forget to dispose of their objects. Disposing of a bitmap when you are done with it requires writing a dispose method in the object that owns the bitmap; disposing of that object in turn requires a dispose method in the form that owns it. Dispose is contagious. It contaminates its container objects.
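The "Dispose is contagious" observation is not specific to C#; here is a minimal Python analogue (class names are made up for illustration): an object owning an unmanaged-style resource must expose a cleanup method, and every container of it is forced to grow one too.

```python
# Sketch of resource-cleanup propagation: each owner must forward close()
# down to the resource it contains, all the way up the containment chain.

class Bitmap:
    """Stands in for a resource the garbage collector cannot fully reclaim."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class Thumbnail:
    """Owns a Bitmap, so it must expose and forward close()."""
    def __init__(self):
        self.bitmap = Bitmap()

    def close(self):
        self.bitmap.close()


class Gallery:
    """Owns Thumbnails, so it is 'contaminated' too and needs its own close()."""
    def __init__(self, count):
        self.thumbnails = [Thumbnail() for _ in range(count)]

    def close(self):
        for t in self.thumbnails:
            t.close()


gallery = Gallery(3)
gallery.close()  # forgetting any close() along the chain leaks the resource
```

One forgotten forwarding call anywhere in the chain, and the leak is back: exactly the situation the bitmap example above describes.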

Let’s go back to our multi-threaded application. We need only one executable to do our job, and that application might run one or more threads for the jobs. If one of our threads collapses, we can detect it in a number of ways. For example, we can update a timestamp every time the thread finishes a batch of commands; an hour later, say, we can declare the thread dead, free its resources and restart it all over again. This usually happens when moving files over a network connection. There are lots of reasons for a move operation to fail: a disk might get corrupted, a file could be left open (so other processes cannot access it), or some jerk might unplug the network cable.
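The timestamp idea above can be sketched as a small heartbeat watchdog. This is only an illustrative sketch (names and timeouts are made up), not the production code from the recording system:

```python
# Heartbeat watchdog: the worker updates a timestamp after every batch of
# work; a monitor declares the thread dead when the timestamp gets stale.

import threading
import time

class Heartbeat:
    def __init__(self):
        self._lock = threading.Lock()
        self._last = time.monotonic()

    def beat(self):
        with self._lock:
            self._last = time.monotonic()

    def is_dead(self, timeout):
        with self._lock:
            return time.monotonic() - self._last > timeout


def worker(heartbeat, stop):
    while not stop.is_set():
        # ... do one batch of real work here (e.g. move a file) ...
        heartbeat.beat()
        time.sleep(0.01)


hb = Heartbeat()
stop = threading.Event()
t = threading.Thread(target=worker, args=(hb, stop), daemon=True)
t.start()
time.sleep(0.05)
alive = not hb.is_dead(timeout=1.0)  # beating recently, so considered alive
stop.set()
t.join()
```

In the real system the monitor would free the dead worker's resources and restart it, as described above.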

A DVB-S broadcast consists of a bunch of transport streams, and a transport stream can carry 10, 20, or 50 radio and TV channels. I noticed that there are small MPEG headers in that stream, so I tried to record every packet to a different file using the stream number information. Voila! It worked! Just because I was able to code it using pipes, I coded it using pipes. I get a transport stream from the capture card and run it through my "capture graph" of DirectShow filters. Zero decoding, zero CPU usage! All I was doing was writing packets into different files. I then decided to use a different process for each of these recordings (read: a different executable running for every channel I record). These programs connect to the main program's pipe and read their data from it. Here is the result: 


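The per-stream splitting described above can be sketched in a few lines. This is a simplified, hedged reconstruction, not the original DirectShow code: MPEG transport streams come in fixed 188-byte packets with a 13-bit packet identifier (PID) in the header, and demuxing means routing each packet to a per-PID output (files in the real system; an in-memory dict here to keep the sketch self-contained).

```python
# Split a transport stream into per-PID outputs: read 188-byte TS packets,
# extract the 13-bit PID from bytes 1-2 of the header, append to its bucket.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47  # every TS packet starts with this sync byte

def demux(stream: bytes) -> dict:
    outputs = {}
    for off in range(0, len(stream) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = stream[off:off + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue  # lost sync; a real demuxer would resynchronize
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        outputs.setdefault(pid, bytearray()).extend(pkt)
    return outputs

def fake_packet(pid):
    """Build a minimal synthetic TS packet for demonstration."""
    return bytes([SYNC_BYTE, (pid >> 8) & 0x1F, pid & 0xFF]) + bytes(185)

streams = demux(fake_packet(0x100) + fake_packet(0x101) + fake_packet(0x100))
```

No decoding happens anywhere, which is why the CPU cost of this approach is essentially zero.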
If you are running an application under Windows, a few things might happen when you lose connection to the network or to a disk. Windows displays a dialog informing you that the drive you are trying to reach is not accessible, and it does that by blocking the message queue. Your application will stop, even when repainting its own canvas. As soon as your threads try to get messages from the message queue, one thread fails and the others fail too.

The same thing happens when you run out of memory or other system resources: your thread collapses and drags the whole application down with it. You lose a lot of computing resources, and resources mean money. We learned that the hard way. After realizing that we could record two channels with one sound card (the left and right channels of a stereo input connected to different sound sources), we designed a new application that records multi-channel audio. By adding 4 sound cards to a computer we could record 16 channels at once. 


Since we needed more information about the recording process, we broadcast UDP messages to the local network to inform the management panels. We also added new telemetry points, like sound level, absolute silence, empty disk space on the recording drive, the last time we got a message from the application and so on. 
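A telemetry datagram of this kind can be sketched in a few lines of Python. The field names below are invented for illustration (the original message format is not shown in the post), and the demo sends over loopback instead of a real broadcast address:

```python
# Periodic UDP telemetry: each recorder sends a small JSON status datagram
# that management panels listening on the network can pick up.

import json
import socket
import time

def send_telemetry(sock, addr, channel, sound_level, free_gb):
    msg = json.dumps({
        "channel": channel,            # which channel this recorder handles
        "sound_level": sound_level,    # e.g. dBFS, to detect absolute silence
        "free_disk_gb": free_gb,       # empty space on the recording drive
        "timestamp": time.time(),      # "last time we heard from it"
    }).encode()
    sock.sendto(msg, addr)

# Loopback demo: a receiver stands in for the management panel.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # OS picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_telemetry(tx, rx.getsockname(), "channel-1", -12.5, 420.0)
report = json.loads(rx.recv(4096))
tx.close()
rx.close()
```

UDP fits this job because a lost status datagram is harmless: the next one arrives a few seconds later anyway.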


Recording an audio file is easy; once you get the basics you’re done. A .wav file consists of a few headers and then the sound samples. A few problems can still occur, and one of the most frequent is running out of disk space. As long as you record, analyze and move the files to the backup server on time, there will be no problems. We would look at the management panel’s dashboard to see the whole picture: if all the machines were recording and we had buffer space on the server, we would sleep well that night.
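"A few headers and then sound samples" really is the whole story; Python's standard wave module does the header bookkeeping for you. A minimal sketch, writing one second of 16-bit mono silence at 44.1 kHz:

```python
# Minimal .wav writer: the wave module emits the RIFF/fmt headers and we
# supply raw PCM frames (here, one second of 16-bit mono silence).

import io
import wave

def write_silence(fileobj, seconds=1, rate=44100):
    with wave.open(fileobj, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"\x00\x00" * rate * seconds)

buf = io.BytesIO()          # in-memory file, so the sketch needs no disk
write_silence(buf)
buf.seek(0)
with wave.open(buf, "rb") as w:
    frames = w.getnframes()
```

A real recorder would write to disk and append frames as they arrive from the sound card, which is exactly where running out of disk space bites.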

Up-scaling a simple system is not just multiplying the resources used by the number of channels. It's not that easy. After we started to record 16 channels with one application, we noticed that if that application crashed for any reason, we lost 16 channels at once. Since we were reporting advertisement broadcast logs, we would lose money.

We also had file transformations, frequency analysis of audio files, and detection of stripe ads on TV recordings. We decided to run each of these in a different process space; in simple words, we run an exe file for every job. If that exe collapses, it does not take the other processes down with it. 


OK, now for the next question: how many processes can we run at the same time? Our CPU and memory are not infinite resources; there is a limit to what we can use, because other processes also need them. We can run the infamous ffmpeg to convert our media files from one format to another, but converting media requires decoding and encoding, and that takes a lot of computing power. We still have other processes working for our business. Now we need a process scheduler.


Without a process scheduler, we would run as many applications as we have jobs. For example, let’s say 50 channels are recording on a computer. We record files an hour long; at the top of each hour, we close the recording and start a new file. Yes, we would have 50 transcoders, analyzers, file movers and so on running at the same time. For media recording systems, this means anarchy, chaos and even catastrophe: media files are big, they cannot stay in cache for long, and we would lose data (and of course, we learned that the hard way too). You can see our CUDA-powered stripe advertisement recognition system running as different processes at the same time on our Windows system.


A process scheduler works like a thread queue. Creating and destroying threads takes time, and having too many threads in your application consumes resources; for this reason, we create a thread and then feed its internal queue with jobs. It's almost the same in our process scheduler. We do not care about the time it takes to start a process, because our media files are huge and it really does not matter. All we want is to avoid blocking I/O devices and to leave some computing resources to the other processes. 
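The scheduler idea can be sketched in a few lines. This is a hedged, minimal sketch (the real system was a Windows service, not this script): keep at most a fixed number of child processes alive, and start a new one from the queue whenever a slot frees up.

```python
# Minimal process scheduler: never more than max_jobs children at once;
# reap finished ones and refill free slots from the pending queue.

import subprocess
import sys
import time
from collections import deque

def run_queue(commands, max_jobs=4, poll=0.05):
    pending = deque(commands)
    running = []
    finished = 0
    while pending or running:
        # Reap finished children and free their slots.
        still_running = []
        for proc in running:
            if proc.poll() is None:
                still_running.append(proc)
            else:
                finished += 1  # real code would log exit code / stderr here
        running = still_running
        # Fill free slots from the queue.
        while pending and len(running) < max_jobs:
            running.append(subprocess.Popen(pending.popleft()))
        time.sleep(poll)
    return finished

# Eight trivial jobs, never more than 3 running at once:
jobs = [[sys.executable, "-c", "pass"]] * 8
done = run_queue(jobs, max_jobs=3)
```

Real workers would be transcoders, analyzers or file movers; a crash of any one of them costs a single slot, not the whole system, which is the entire point of process-based isolation.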

After all these years, when I left the company I had my own experience in my pocket. I never use GUI workers anymore (except for demo purposes). I create console workers and redirect their output to an MDI application. There is a limit to the number of console applications, and our MDI parent shows "fake" console applications by printing their STDOUT to its window. It also redirects STDERR to a logger, which logs the errors to a database to check later. 

This way I keep the processes at a fixed number and I do not have floating windows all over the desktop. I can connect to this MDI parent from another machine to see its queue and running jobs. If a process fails, the failure is contained in the corresponding process space; the job gets removed from the queue, and another process is up and running in seconds.


This way I can keep the number of running jobs constant (you can see that it is set to 16 jobs at a time). I see errors in red and normal console text in green (yes, a little bit of nostalgia is good for anyone). 

You could call it a "poor man's microservice". Here is a screenshot of a working application:




That's all for now. There might be some points to be touched on.

Monday, January 25, 2021

How slow is Python?

I know that people who use Python love it more than anything else. It is easy to run, works everywhere, easy to deploy, easy to understand and so on. I come from the Assembly/C/C++ tradition (or should I say tribe), which values speed over ease of use. For that reason alone, I love Assembly and C, but I'd rather use C++ because of templates, constant expressions, the STL and such. During the pandemic, I did what I love most: self-education. I attended government-backed online courses and applied to a new university after my bachelor's degree. No, not one. Three. Yes, I'm now handling three universities (two of them are remote-learning institutions), and one of them is something I was yearning to finish: a Master's degree, at an institution that is not remote-learning. I am a student now, studying for my Master's degree (and yes, I use my student public transportation card).

Also during the pandemic, I decided to learn new languages while cooking and baking at home. I started with Python. Python is really easy compared to most other languages. After attending the "Python and Tensorflow for Data Science" course at the magnificent BTK Akademi (the government-backed, high-quality education I mentioned before) I got the basics of the language. It is elegant, and really as easy as most people advertise. But it comes with a price: speed.

How would you benchmark a new language? I usually write a little program and compare its running time against something I know well. I decided to print prime numbers up to ten million (in fact, I ask the user for the upper limit). I use "time" under my Ubuntu to measure the running time. Running the code under Jupyter Notebook took more than I could cope with, so I decided to run it up to a million instead, as a little script from the command line:
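For readers who cannot see the screenshots, here is a reconstruction of the kind of straightforward trial-division script the benchmark describes (the exact original code is in the screenshot; this is my sketch of the same logic):

```python
# Trial-division primality test: try every divisor d with d*d <= n.
# Deliberately naive, matching the benchmark's "same logic in both
# languages" approach rather than an optimized sieve.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def primes_up_to(limit):
    return [n for n in range(2, limit + 1) if is_prime(n)]

first_primes = primes_up_to(30)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

In the benchmark this runs with limit set to one million, and the printing itself is part of the measured time.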


The code is easy (as I advertised; even I advertised that, remember). Here is a screenshot from Visual Studio Code, which I used to display it (it is way sexier than Kate, or any Karen for that matter).



I am known to be a patient guy, but even I got bored waiting for the process to end. It runs and runs and runs. Here is the result:


I knew there was something wrong with it. I decided to write the same algorithm in C++, using the same logic as much as possible so that the comparison would be fair (you know the rants on language-comparison blogs; those guys and gals are as fanatical as football fans). Here is the (comparably) same code written in C++:



I am not saying that this approach is the ultimate way to find primes, and it is not optimized code for the job. The sole purpose of this code is to benchmark two languages, not to find the best way to compute prime numbers. Here it comes, ladies and gentlemen, and the winner is:


Yes. Python finished the same job in 4 minutes and a bit over 5 seconds. C++ took merely 20.5 seconds: about 8% of the former, roughly 12 times faster.

Will I use Python? If the job requires it, yes; I am fluent in it. Would I use it as my primary language? No. Definitely not. It is slow as hell. No way!

Friday, June 19, 2020

There is something wrong with the .NET developers (when it comes to displaying data on a form)

Note to self: Caption all the caps!

There is something wrong with .NET developers. There are so many stupid examples flying around when it comes to data binding, data editing and basic CRUD operations. It's just plain wrong.

I come from a Delphi background, and you cannot even guess how comfortable it is to write database applications within the Delphi IDE. Just put a data grid, a connection component, a dataset component and a datasource component onto a form. Set the grid's DataSource property to the datasource component and set the connection. All you need to write is one line of SQL, for example "Select * from Products", and you will have the Products table showing in the data grid on your form. Yes, you don't even have to run the application to see the data.


Yes, you’ve heard it right. You don’t have to write code to get the data from the server, display it on a grid, edit the data and then post it, or add/insert new records. All you have to write is a line of SQL code.

We know that Delphi is the brainchild of a genius called Anders Hejlsberg. Yes, the same guy who invented C# for Microsoft. I see examples on the net using C# to retrieve data, edit it, add new records and delete records, all by hand! There are things called data binding, Entity Framework and so on, to use data with minimum effort, just like we do in Delphi. I’ve seen people using select queries to retrieve the data, and then when they edit it they write update queries, they write delete queries… What’s wrong with you people! You don’t have to do these tasks manually. Anders Hejlsberg has done it for you already. People are assigning values of text boxes to parameters, getting the data from cells of their DataGridView. You have the data in your hands but you are using grid cells… C’mon.

I decided to provide proper examples for people who are learning C# and database programming. I have also been learning the language and its data-bound operations for some time, and these examples will be helpful to me when I need to remember the basics. I am tired of reinventing the wheel every time I need to display some data on a form.

I will show you a basic list form, a form with editing capabilities, and a form with master-detail editing functions: all of the necessary CRUD operations without writing much code. There are ways to create forms with data-bound components on them with zero coding, but it is not like Delphi and you might not have full control over the data. For example, you have to put a save button on a navigator component and then write "db.SaveChanges()" in its OnClick event to save the changes. So I'm not going to evaluate the "zero code" version; I will implement a few basic CRUD scenarios.

What I’m planning here is to create a basic list form that selects some records and returns the list of selected records to the caller. Not a big deal.



First things first. Let’s create a new WinForms project for this.

I am using Visual Studio 2019 Community Edition for these examples. Start Visual Studio and select Windows Forms App from the Create New Project dialog.




 I renamed my app.


Get some packages

Nowadays I'm using Entity Framework Core, but let's go classic (since Core has no proper WinForms support)

Add Entity DataModel

We're ready. Now create a form.

Create a DB connection and a BindingSource.

Show (and automatically dispose) the look up form.

Easy peasy. However, this is the default "automatically create columns" grid. Let's put on some make-up.

For example, I would like to show "Grup ID" instead of the field name at the header.

Nice!

Let's put the rest.

Here is an example


You don't have to write code for this. This is where I like Delphi much. But don't worry, there is a way for this in our beloved Visual Studio.


The most important thing here is DataPropertyName, which requires the field name (or the property name of the field in our model class).

We can comment out the make-up code now, since we fixed it on the form using the DataGridView property editors.

Let's get a navigation bar. I wrote a perfect navigation bar for my Delphi projects, and I think I've been using it for more than 18 years.

You can customize it.

Here is what I miss from Delphi again: I added a save-changes button, yet it does not save the changes. 

We do it by a line of C# code.

Let's put a search text box.

Fix the anchors of the grid to bottom (the important part)

And fix both bottom and top anchors for the text box.

This way our form will adapt to size changes. No code, no effort.

Write some code for our text box on change event so that we can filter the records.

Filter code is a simple lambda expression.

Yet it is powerful.

Here is all the code for our working form. Not much, not less.

Let's add a detail grid (here is the reason why we did not fix the right sides of the grids).

We will create a master/detail relation. Note that I never had to write code for this in Delphi.

Add a company binding source to our code.

Whenever we click another line on the group grid or press one of the navigator buttons, a "current item changed" event occurs. I've seen C# coders in the wild who hook the grid events to do this. You have the data in your hands; why bother with the presentation of it?

We can write another filter for the details grid (company DataGridView)

Voila! It is working!
