An Introduction to Real Time Java Technology: An Architectural Analysis

Real-time computing is often equated with "high speed", but that is only one side of it. At its core, real-time computing is about predictability: the assurance that a system or application will always perform its work within a stipulated time frame. The time frames involved need not be very small, although sometimes they are, and the consequences of missing a deadline are not always hazardous, although sometimes they are. What makes an application a real-time one is whether its requirements include temporal constraints.

Hard Real-Time, Soft Real-Time, Firm Real-Time: A Glance at Some Fundamentals

Real-time systems are classified into several types, depending on the system's requirements.

A hard real-time system is one that must meet every required deadline; missing even one constitutes a failure. Typically, hard real-time systems deal with short latencies, measured in microseconds or milliseconds.

Are hard real-time systems hazardous? They may or may not be, depending on the nature of the system or application.

Soft real-time systems are also associated with timing constraints, but those constraints are not expressed as absolute values. For a soft real-time task, a result produced after the deadline is not considered incorrect, and missing the deadline does not cause a system failure.

Are soft real-time systems hazardous? No; missing a deadline is acceptable.

Firm real-time systems are also bound by time, and each task should produce its result within its deadline. A firm real-time system differs from a hard real-time one, however: in a hard real-time system, the system fails as soon as a deadline is crossed without the task completing, whereas in a firm real-time system a result that arrives after the deadline is simply of no use, and the system does not fail.

Unpredictability in a Java Technology-Based Application

A number of factors may render the timing of execution unpredictable and therefore may cause a standard Java task to miss its deadline. Here are the most common.

Operating-system scheduling: In Java technology, threads are created by the JVM (the Java Virtual Machine) but are scheduled by the operating system's scheduler. So for the JVM to provide temporal guarantees, the OS must provide scheduling-latency guarantees as well.

Priority inversion. One hazard in an application in which threads can have different priorities is priority inversion. If a lower-priority thread shares a resource with a higher-priority thread, and if that resource is guarded by a lock, the lower-priority thread may be holding the lock at the moment when the higher-priority thread needs it. In this case, the higher-priority thread is unable to proceed until the lower-priority thread has completed its work — and this can cause the higher-priority thread to miss its deadline.

Class loading, initialization, and compilation: The Java language specification requires classes to be initialized lazily, when an application first uses them. Such initializations might execute user code, creating jitter (a variance in latency) the first time a class is used. In addition, the specification allows classes to be loaded lazily. Because loading a class may require going to disk or across the network to find its definition, referencing a previously unreferenced class can cause an unexpected, and potentially huge, delay. Finally, the JVM has some latitude to decide when, if ever, to translate a class from bytecode into native code. Typically, a method is compiled only when it is executed frequently enough to warrant the cost of compilation.
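
One common mitigation is to load and initialize performance-critical classes during application start-up, before any deadline-sensitive work begins. The sketch below shows that idea with plain Class.forName; the class names are hypothetical placeholders.

```java
// Sketch: force loading and initialization of critical classes at start-up,
// so that the first real-time use does not pay the lazy-loading cost.
// The class names below are placeholders for whatever your hot path touches.
public final class ClassPreloader {
    private static final String[] CRITICAL_CLASSES = {
        "com.example.control.SensorReader",   // hypothetical
        "com.example.control.ActuatorDriver"  // hypothetical
    };

    public static void preload() {
        for (String name : CRITICAL_CLASSES) {
            try {
                // forName with initialize=true runs static initializers now,
                // instead of at first use inside a time-critical section.
                Class.forName(name, true, ClassPreloader.class.getClassLoader());
            } catch (ClassNotFoundException e) {
                // Fail fast at start-up rather than during real-time operation.
                throw new IllegalStateException("Missing critical class: " + name, e);
            }
        }
    }
}
```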

Garbage collection: The primary source of unpredictability in Java applications is garbage collection (GC). The GC algorithms that standard JVMs use all involve a stop-the-world pause, in which the application threads are stopped so that the garbage collector can run without interference. Applications with hard response-time requirements cannot tolerate long GC pauses. Despite a large amount of work in recent years on reducing GC pauses, a so-called low-pause collector is still not enough to guarantee predictability and still may require significant tuning and testing.

Task Types and Deadlines

The Real-Time Specification for Java (RTSJ) models the real-time part of an application as a set of tasks. The deadline of a task is the time by which the task must be completed. Real-time tasks fall into several types, based on how well the developer can predict their frequency and timing.

The RTSJ uses this task information in several ways to ensure that critical tasks do not miss their deadlines.

First, the RTSJ allows you to associate with each task a deadline miss handler. If a task does not complete before its deadline, this handler is called.

Deadline-miss information can be used in deployment to take corrective action or to report performance or behavioral information to the user or to a management application. By comparison, in non-real-time applications, failures may not become apparent until secondary or tertiary side effects arise — such as request timeouts or memory depletion — at which point it may be too late to recover gracefully.

Deadline-miss handling is delegated to a deadline-miss handler that you register with the task.
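
A minimal sketch of that pattern is shown below, assuming an RTSJ 1.0.x implementation of javax.realtime; the 10 ms period and deadline and the handler body are illustrative, and constructor signatures can differ slightly between RTSJ versions.

```java
import javax.realtime.AsyncEventHandler;
import javax.realtime.PeriodicParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;

public class DeadlineMissExample {
    public static void main(String[] args) {
        // Handler invoked by the runtime if a release misses its deadline.
        AsyncEventHandler missHandler = new AsyncEventHandler() {
            public void handleAsyncEvent() {
                System.err.println("Deadline missed - taking corrective action");
            }
        };

        // Periodic release every 10 ms with a 10 ms deadline (illustrative values).
        PeriodicParameters release = new PeriodicParameters(
                null,                        // start: release immediately
                new RelativeTime(10, 0),     // period: 10 ms
                null,                        // cost: not specified
                new RelativeTime(10, 0),     // deadline: 10 ms
                null,                        // cost-overrun handler: none
                missHandler);                // deadline-miss handler

        PriorityParameters priority =
                new PriorityParameters(PriorityScheduler.instance().getMaxPriority());

        RealtimeThread rt = new RealtimeThread(priority, release) {
            public void run() {
                do {
                    // ... periodic real-time work ...
                } while (waitForNextPeriod());
            }
        };
        rt.start();
    }
}
```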

If no deadline-miss handler is registered, the thread can also detect and handle the miss itself.
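
A minimal sketch of that self-handling pattern, under the same javax.realtime assumptions: waitForNextPeriod() returns false when a deadline was missed and no miss handler is installed, so the thread can react in place.

```java
import javax.realtime.PeriodicParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;

public class SelfHandledMissExample {
    public static void main(String[] args) {
        PeriodicParameters release = new PeriodicParameters(
                null, new RelativeTime(10, 0),   // 10 ms period
                null, new RelativeTime(10, 0),   // 10 ms deadline
                null, null);                     // no handlers registered

        PriorityParameters priority =
                new PriorityParameters(PriorityScheduler.instance().getMinPriority());

        new RealtimeThread(priority, release) {
            public void run() {
                while (true) {
                    // ... periodic real-time work ...
                    if (!waitForNextPeriod()) {
                        // Deadline missed and no handler installed:
                        // skip work, degrade, or log, then re-align with the schedule.
                        System.err.println("Deadline missed - handled by the thread itself");
                    }
                }
            }
        }.start();
    }
}
```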

How to Manage Thread Priorities?

In a real-time environment it is very important to manage thread priorities; they are among the most vital parameters to consider. No system can guarantee that all tasks will complete on time. However, a real-time system can guarantee that if some tasks are going to miss their deadlines, the lower-priority tasks are victimized first.

The RTSJ defines at least 28 priority levels and requires that implementations enforce them strictly.
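
As a small sketch, assuming the javax.realtime PriorityScheduler and PriorityParameters classes, an application can query the real-time priority range and assign a priority as shown below; the number of levels beyond the required 28 is implementation-specific.

```java
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;

public class PriorityExample {
    public static void main(String[] args) {
        PriorityScheduler scheduler = PriorityScheduler.instance();

        int min = scheduler.getMinPriority();   // lowest real-time priority
        int max = scheduler.getMaxPriority();   // highest real-time priority
        System.out.println("Real-time priority range: " + min + " .. " + max);

        // Assign a priority just below the maximum to a real-time thread.
        PriorityParameters priority = new PriorityParameters(max - 1);
        RealtimeThread worker = new RealtimeThread(priority) {
            public void run() {
                // ... time-critical work ...
            }
        };
        worker.start();

        // Priorities can also be adjusted at run time if a task's importance changes.
        priority.setPriority(min + 1);
    }
}
```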

The problem of priority inversion can undermine the effectiveness of a priority facility. Accordingly, the RTSJ requires priority inheritance in its scheduling. Priority inheritance avoids priority inversion by boosting the priority of a thread that is holding a lock to that of the highest-priority thread waiting for that lock.

This prevents a higher-priority thread from being starved because a lower-priority thread has the lock that it needs but cannot get adequate CPU cycles to finish its work and release the lock. This feature also prevents a medium-priority task that does not depend on the shared resource from preempting the higher-priority task.
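
The RTSJ exposes the monitor-control policy through javax.realtime.MonitorControl. The snippet below is a sketch that sets priority inheritance explicitly as the policy for all monitors, although a compliant implementation is already required to apply it by default.

```java
import javax.realtime.MonitorControl;
import javax.realtime.PriorityInheritance;

public class MonitorPolicyExample {
    public static void main(String[] args) {
        // Priority inheritance is the default monitor-control policy required by the RTSJ,
        // but it can also be set explicitly:
        MonitorControl.setMonitorControl(PriorityInheritance.instance());

        Object sharedLock = new Object();
        // If a higher-priority thread blocks on sharedLock while a lower-priority thread
        // holds it, the holder's priority is boosted until it releases the lock.
        synchronized (sharedLock) {
            // ... access the shared resource ...
        }
    }
}
```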

In addition, the RTSJ is designed to allow both non-real-time and real-time activities to coexist within a single Java application. The degree of temporal guarantee provided to an activity depends on the type of thread in which the activity executes: java.lang.Thread or javax.realtime.RealtimeThread.

Standard java.lang.Thread (JLT) threads are supported for non-real-time activities. JLT threads can use the 10 priority levels specified by the Thread class, but these are not suitable for real-time activities because they provide no guarantees of temporal execution.

The RTSJ also defines the javax.realtime.RealtimeThread (RTT) thread type. RTTs can take advantage of the stronger thread priority support that the RTSJ offers, and they are scheduled on a run-to-block basis rather than a time-slicing basis. That is, the scheduler will preempt an RTT if another RTT of higher priority becomes available for execution.

Memory-Management Extensions

One of the problems with automatic memory management in standard virtual machines (VMs) is that one activity may have to “pay for” the memory-management costs of another activity. Consider an application with two threads: a high-priority thread (H) that does a small amount of allocation, and a low-priority thread (L) that does a great deal of allocation.

If H is unlucky enough to run at a time when L has consumed almost all the available memory in the heap, the garbage collector may kick in and run for a long time when H goes to allocate a small object. Now, H is paying — in the form of an incommensurately long delay — for L’s enormous memory consumption.

The RTSJ provides a subclass of RTT called NoHeapRealtimeThread (NHRT). Instances of this subclass are protected from GC-induced jitter. The NHRT class is intended for hard-real-time activities.

To maximize predictability, NHRTs are allowed neither to use the garbage-collected heap nor to manipulate references to the heap. Otherwise, the thread would be subject to GC pauses, and this could cause the task to miss its deadline. Instead, NHRTs can use the scoped memory and immortal memory features to allocate memory on a more predictable basis.

A sample NHRT implementation is sketched below.
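
This is a minimal sketch, again assuming javax.realtime: the NHRT uses immortal memory, so nothing it allocates is ever garbage collected. The priority is illustrative, and on most implementations the NHRT object itself must also be allocated in immortal or scoped memory, hence the executeInArea wrapper.

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.NoHeapRealtimeThread;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;

public class NhrtExample {
    public static void main(String[] args) {
        // Create the NHRT while immortal memory is the allocation context,
        // so the thread object itself does not live on the garbage-collected heap.
        ImmortalMemory.instance().executeInArea(new Runnable() {
            public void run() {
                PriorityParameters priority =
                        new PriorityParameters(PriorityScheduler.instance().getMaxPriority());

                // Initial memory area for the thread's own allocations: immortal memory.
                NoHeapRealtimeThread nhrt =
                        new NoHeapRealtimeThread(priority, ImmortalMemory.instance()) {
                            public void run() {
                                // Allocations here go to immortal memory and are never collected,
                                // so the thread is not exposed to GC pauses. It also must not
                                // read or write references to heap objects.
                                byte[] buffer = new byte[256];  // stays in immortal memory
                                // ... hard-real-time work using buffer ...
                            }
                        };
                nhrt.start();
            }
        });
    }
}
```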

Memory Areas

The RTSJ provides for several means of allocating objects, depending on the nature of the task doing the allocation. Objects can be allocated from a specific memory area, and different memory areas have different GC characteristics and allocation limits.

For more details on scoped memory areas, refer to: https://javapapers.com/core-java/what-is-scoped-memory-and-why-java-uses-it/

For each thread, there is always an active memory area, called the current allocation context. All objects allocated by a thread are allocated from this area. The current allocation context changes when you execute a block of code with a specific memory area.

The memory-area API contains an enter(Runnable) method that causes the specified task to be executed using that area as the current allocation context. Therefore, even if your code uses third-party libraries that allocate memory, you can still run that code with scoped-memory areas, and all the temporary objects allocated by that code will go away when the current allocation context is finalized.
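
A minimal sketch of scoped memory in use, assuming javax.realtime.LTMemory (a scoped-memory area with linear-time allocation); the sizes are illustrative, and the Runnable could just as well call into a third-party library.

```java
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;
import javax.realtime.ScopedMemory;

public class ScopedMemoryExample {
    public static void main(String[] args) {
        // Only schedulables (real-time threads, NHRTs, async event handlers)
        // may enter scoped memory, so the work runs in a RealtimeThread.
        new RealtimeThread() {
            public void run() {
                // A scoped-memory area: 1 MB initially, growable to 2 MB (illustrative sizes).
                ScopedMemory scope = new LTMemory(1024 * 1024, 2 * 1024 * 1024);

                // While the Runnable executes, 'scope' is the current allocation context:
                // every 'new' inside it, including allocations made by library code it calls,
                // comes from the scope rather than from the heap.
                scope.enter(new Runnable() {
                    public void run() {
                        StringBuilder temp = new StringBuilder("temporary ");
                        temp.append("data");
                        System.out.println(temp);
                        // When the last thread leaves the scope, all of these objects
                        // are reclaimed at once; no garbage collection is involved.
                    }
                });
            }
        }.start();
    }
}
```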

Advanced: Communication Between Threads

One of the advantages of the RTSJ is that it allows real-time and non-real-time applications to coexist within a single VM.

The main challenge this raises is communication between threads. Any communication requires shared resources, and here that resource is memory; because real-time and non-real-time threads have different memory constraints, this is exactly what makes the communication difficult.

The obvious mechanism for communication between threads is a queue: one thread puts data onto the queue (enqueue), and the other removes data from it (dequeue).

An imaginary use-case: Imagine that an RTT is putting data onto a queue every 10 milliseconds, and a non-RTT is consuming the data from the queue. What happens if the non-RTT does not get enough CPU time to drain the queue? The queue will grow, and you have a choice of three bad options:

The queue could be allowed to grow without bound, potentially running out of memory.

Data from the queue must be discarded.

The RTT will have to block.

The first option is impractical, and the last option is unacceptable. The predictability of real-time activities should not be affected by the behavior of lower-priority activities. The RTSJ supports several types of non-blocking queues for communicating between threads. When they are used between real-time and non-RTTs, there are some restrictions on the memory area in which elements must reside.
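
As a sketch of such a queue, the snippet below uses javax.realtime.WaitFreeWriteQueue, whose write side never blocks and simply reports failure when the queue is full; the capacity, element type, and memory area are illustrative, and constructor signatures vary between RTSJ versions.

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.PeriodicParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.PriorityScheduler;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;
import javax.realtime.WaitFreeWriteQueue;

public class QueueExample {
    public static void main(String[] args) throws Exception {
        // Bounded queue whose internal structures are allocated in immortal memory;
        // writes never block, reads may block.
        final WaitFreeWriteQueue queue =
                new WaitFreeWriteQueue(64, ImmortalMemory.instance());

        // Producer: a real-time thread releasing every 10 ms (illustrative values).
        PriorityParameters priority =
                new PriorityParameters(PriorityScheduler.instance().getMaxPriority());
        PeriodicParameters every10ms = new PeriodicParameters(
                null, new RelativeTime(10, 0), null, new RelativeTime(10, 0), null, null);

        new RealtimeThread(priority, every10ms) {
            public void run() {
                long sample = 0;
                do {
                    try {
                        // write() does not block; it returns false when the queue is full,
                        // so the RTT can drop the sample instead of waiting on the consumer.
                        if (!queue.write(Long.valueOf(sample++))) {
                            // Consumer is falling behind: discard this sample.
                        }
                    } catch (Exception e) {
                        // e.g. MemoryScopeException, depending on the RTSJ version.
                    }
                } while (waitForNextPeriod());
            }
        }.start();

        // Consumer: an ordinary (non-real-time) thread draining the queue.
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        Object sample = queue.read();   // may block until data is available
                        System.out.println("Consumed " + sample);
                    } catch (Exception e) {
                        return;
                    }
                }
            }
        }).start();
    }
}
```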

For more details regarding Real-Time Java programming, visit the official JSR 1 page: https://jcp.org/en/jsr/detail?id=1
