Common Kernel Development for Heterogeneous Linux Platforms

DOI : 10.17577/IJERTCONV3IS07013


D. Dhivakar,
P.G. Student, M.E. Computer Science and Engineering,
National College of Engineering, Anna University, Chennai, India.

L. K. Indumathi,
P.G. Scholar, M.Tech Computer and Information Technology,
Manonmaniam Sundaranar University, India.

Abstract-This paper introduces a common kernel that supports heterogeneous Linux platforms. The main objective is to create a common kernel that works well on all Unix-like platforms without requiring a separate kernel build for each Linux operating system. This implementation reduces operating-system development time and improves kernel efficiency. The various kernel features are kept synchronized, and the security mechanism is also maintained properly. The kernel includes all the basic modules: memory management, process management, interrupt request and response handling, the file system, the assembler, and the exception-handler code, all fetched into a single batch build called the Linux Kernel Library (LKL).

Keywords: Portable kernel, Linux kernel modules, GRUB, LKM, Linux batch file generation, monolithic kernel, object file.

1 INTRODUCTION:

The kernel is a program that constitutes the central core of an operating system. It has complete control over everything that occurs in the system. The Linux kernel is a Unix-like operating system kernel, so Linux operating systems are called Unix-like operating systems. The kernel is the first part of the operating system to load into memory during booting (i.e., system startup), and it remains in memory for the entire duration of the computer session because its services are required continuously. The kernel also provides all the essential services needed by the other parts of the operating system and by the various application programs. Linux operating systems have two address spaces for running processes: user space and kernel space. The kernel performs its tasks, such as executing processes and handling interrupts, in kernel space; other tasks, such as writing text in a text editor or running programs in a GUI (Graphical User Interface), are done in user space. This separation prevents user data and kernel data from interfering with each other, which would otherwise diminish performance or cause the system to become unstable (and possibly crash). The kernel provides basic services to all other parts of the operating system, typically including memory management, process management, file management and I/O (input/output) management (i.e., accessing the peripheral devices). These are all done with the help of system calls [3].
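These kernel services reach user programs through the system-call interface. As an illustrative sketch (standard Linux user-space code, not part of the proposed kernel), the same kernel service can be requested either through the usual libc wrapper or through the raw system-call interface:

```c
/* Requesting a kernel service two ways: through the libc wrapper
 * and through the raw system-call interface (Linux-specific). */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

/* Returns the calling process ID via a raw system call. */
long raw_getpid(void)
{
    return syscall(SYS_getpid);
}

/* Returns the same value via the conventional libc wrapper. */
long wrapped_getpid(void)
{
    return (long)getpid();
}
```

Both calls cross from user space into kernel space and back; the wrapper merely hides the trap mechanism.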

2 TYPES OF KERNEL:

Linux is a monolithic kernel. This section describes the types of kernel. Kernels are classified into four broad categories: monolithic kernels, microkernels, hybrid kernels and exokernels [3]. Each has its own advocates and detractors.

2.1 Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all of the operating system's core functions and the device drivers (small programs that allow the operating system to interact with hardware devices such as disk drives, video cards and printers). Modern monolithic kernels, such as Linux and FreeBSD, fall into the category of Unix-like operating systems. A major feature of the monolithic kernel is the ability to load modules at runtime; this capability makes the kernel easy to extend and minimizes the amount of code running permanently in kernel space.

2.2 A microkernel usually provides only minimal services, such as defining memory address spaces, inter-process communication (IPC) and process management. All other functions, such as hardware management, are implemented as processes running independently of the kernel. Examples of microkernel operating systems are AIX, BeOS, Hurd, Mach, Mac OS X, MINIX and QNX.

2.3 Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would in user space. These kernels represent a compromise implemented by some developers before it was demonstrated that pure microkernels can provide high performance. Hybrid kernels should not be confused with monolithic kernels that can load modules after booting (such as Linux). Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000 and XP. DragonFly BSD, a recent fork (i.e., variant) of FreeBSD, is the first non-Mach-based BSD operating system to employ a hybrid kernel architecture.

2.4 Exokernels are a still-experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware; they provide no hardware abstractions on top of which applications can be constructed. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program. Exokernels themselves are extremely small, but they are accompanied by library operating systems, which provide application developers with the conventional functionality of a complete operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API (application programming interface), such as one for Linux and one for Microsoft Windows, thus making it possible to run Linux and Windows applications simultaneously [2] and [3].

Fig.1 Different Kernel Structure

3 EXISTING PROBLEM:

Linux distributions (distros) fall into three main families: 1) Fedora, 2) openSUSE and 3) Debian. Each family contains several operating systems; Debian, for example, includes Ubuntu, Linux Mint and others. Each operating-system distribution maintains its own customized kernel structure instead of following the general kernel source-tree structure, and uses distribution-dependent code to develop and release kernel versions. This leads to hardware and software platform issues, so open-source users are not able to receive the common kernel development group's (www.linuxfoundation.org) features periodically [5].

4 PROPOSED SYSTEM MODEL:

To overcome the problems of the existing system, this paper designs a common kernel source tree to support heterogeneous Linux platforms. The proposed kernel contains all the basic modules and their functions, and is built with three software tools: 1. NASM, 2. GRUB and 3. the GCC compiler. Kernel development is a large task: to develop a kernel, one must understand the interfaces (how to create software that interfaces with and manages the hardware). A kernel is designed to be the central core of the operating system, containing the logic that manages the resources (hardware devices). One of the most important system resources to manage is the processor.

Fig.2 Proposed Kernel Structure

The kernel allocates time for each operation or task and also handles interrupts. This implies multitasking, achieved with the help of a yield function that allocates processing time to the next runnable process or task. There is also preemptive multitasking, where the system timer is used to interrupt the current process and switch to a new one: a form of context switch that better guarantees each process a chunk of time to run. Several scheduling algorithms are used to decide which process runs next. The simplest scheduling algorithm is 'Round Robin': simply take the next process in the list and choose it to be runnable. A more complicated scheduler involves 'priorities', where certain higher-priority tasks are allowed more time to run than lower-priority tasks. More complicated still is a real-time scheduler, designed to guarantee that a certain process will be allowed at least a set number of timer ticks to run. Ultimately, this number-one resource comes down to time.
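The 'Round Robin' policy described above can be sketched in a few lines of C. The task structure and function names below are illustrative only, not taken from the paper's kernel:

```c
/* A minimal round-robin pick-next sketch: the scheduler simply
 * advances to the next runnable task in a circular list. */
#include <stddef.h>

struct task {
    int id;
    int runnable;   /* nonzero if the task may be scheduled */
};

/* Returns the index of the next runnable task after 'current',
 * scanning circularly; returns -1 if nothing is runnable. */
int rr_pick_next(const struct task *tasks, int ntasks, int current)
{
    for (int i = 1; i <= ntasks; i++) {
        int idx = (current + i) % ntasks;
        if (tasks[idx].runnable)
            return idx;
    }
    return -1;
}
```

A priority scheduler would replace the circular scan with a search over per-task priority values; the interface stays the same.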

The next most important resource in the system is memory. The kernel should be coded to be both memory-efficient and CPU-efficient, using memory for caches and buffers to 'remember' commonly used items instead of looking them up; the best approach is a combination of the two: strive for the best memory usage while preserving CPU time. The last resource the kernel needs to manage is hardware. This includes Interrupt Requests (IRQs), special signals that hardware devices such as the keyboard and hard disk use to tell the CPU to execute a routine that handles the data these devices have ready. Another hardware resource is the Direct Memory Access (DMA) channel. A DMA channel allows a device to lock the memory bus and transfer its data directly into system memory whenever it needs to, without halting the processor's execution. This improves system performance: a DMA-enabled device can transfer data without involving the CPU, and can then interrupt the CPU with an IRQ to tell it that the data transfer is complete. Sound cards and Ethernet cards are known for using both IRQs and DMA channels. The third hardware resource takes the form of an address, like memory, but on the I/O bus: a port. A device can be configured, read, or given data through its I/O port(s). Finally, the Grand Unified Bootloader (GRUB) is used to load the kernel into memory. GRUB must be directed to a protected-mode binary image: this 'image' is our kernel, in the format of a .bin file.
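For GRUB to recognize such a protected-mode kernel image, the image carries a Multiboot header whose three fields sum to zero. A minimal sketch in C follows; the constants come from the Multiboot 1 specification, while in a real kernel the header is normally emitted from the assembly entry file and must lie within the first 8 KiB of the image:

```c
/* Multiboot 1 header sketch: GRUB scans the start of the kernel
 * image for this magic number and verifies the checksum. */
#include <stdint.h>

#define MULTIBOOT_MAGIC 0x1BADB002u
#define MULTIBOOT_FLAGS 0x00000003u  /* page-align modules, pass memory map */

struct multiboot_header {
    uint32_t magic;
    uint32_t flags;
    uint32_t checksum;  /* chosen so magic + flags + checksum == 0 */
};

static const struct multiboot_header mb_header = {
    MULTIBOOT_MAGIC,
    MULTIBOOT_FLAGS,
    -(MULTIBOOT_MAGIC + MULTIBOOT_FLAGS)
};
```

The checksum is simply the two's-complement negation of the other two fields, so the 32-bit sum of all three wraps to zero.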

    1. Kernel Entry point creation:

The kernel's entry point is the piece of code that is executed FIRST when the boot loader calls the kernel. This chunk of code is almost always written in assembly language, because it needs no compiler or interpreter run-time support and can directly access the hardware with single instructions; tasks such as setting up a new stack or loading a new GDT, IDT, or segment registers are simpler here than in other languages. Save the assembler code in this one file, and put all the rest of the sources in several C source files. Create a header file to hold the prototypes for the main() program, which contains the functions of the main program [4].

    2. GDT:

A vital part of the processor's various protection measures is the GDT (Global Descriptor Table). The GDT defines base access privileges for memory. An entry in the GDT can be used to generate segment-violation exceptions that give the kernel an opportunity to end a misbehaving process. Most modern operating systems use a mode of memory management called "paging" for this purpose; it is a lot more versatile and allows for higher flexibility. The GDT is also capable of defining Task State Segments (TSSes); a TSS is used in hardware-based multitasking to enable the multitasking functions.

Note that GRUB already installs a GDT, but building our own involves defining the table, telling the processor where it is, and finally loading the processor's CS, DS, ES, FS, and GS registers with our new entries. The CS register, also known as the Code Segment, tells the processor which offset into the GDT holds the access privileges for executing the current code. The DS register, also known as the Data Segment, similarly defines the access privileges for the current data; ES, FS, and GS are simply alternate DS registers. The GDT itself is a list of 64-bit entries. Each entry defines where in memory the allowed region starts, the limit of that region, and the access privileges associated with the entry. One common rule is that the first entry in the GDT, entry 0, is the NULL descriptor. No segment register should be set to 0; otherwise a General Protection Fault is raised, which is a protection feature of the processor. The General Protection Fault and several other types of 'exceptions' are explained in detail in the section on Interrupt Service Routines (ISRs).

Each GDT entry also defines whether the segment the processor is currently running in is for system use (Ring 0) or for application use (Ring 3). There are other ring levels, but they are not important; major operating systems today use only Ring 0 and Ring 3. As a basic rule, an application causes an exception when it tries to access system (Ring 0) data. This protection exists to prevent an application from causing the kernel to crash. As far as the GDT is concerned, the ring level tells the processor whether it is allowed to execute special privileged instructions. Certain instructions are privileged, meaning that they can only be run at higher ring levels [7]. Examples are 'cli' and 'sti', which disable and enable interrupts, respectively. If an application were allowed to use 'cli' or 'sti', it could effectively stop the kernel from running. Once our GDT loading infrastructure is in place, we compile and link it into our kernel.
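The 64-bit descriptor layout described above can be filled in by a small helper. The structure and field names below follow common tutorial code rather than the paper itself (this particular layout has no padding, though real kernel code usually marks the struct packed to be safe):

```c
/* Packing one 64-bit GDT descriptor from a base address, a limit,
 * an access byte and a granularity byte. */
#include <stdint.h>

struct gdt_entry {
    uint16_t limit_low;    /* limit bits 0..15            */
    uint16_t base_low;     /* base bits 0..15             */
    uint8_t  base_middle;  /* base bits 16..23            */
    uint8_t  access;       /* present, ring level, type   */
    uint8_t  granularity;  /* limit bits 16..19 + flags   */
    uint8_t  base_high;    /* base bits 24..31            */
};

void gdt_set_gate(struct gdt_entry *e, uint32_t base, uint32_t limit,
                  uint8_t access, uint8_t gran)
{
    e->base_low    = base & 0xFFFF;
    e->base_middle = (base >> 16) & 0xFF;
    e->base_high   = (base >> 24) & 0xFF;
    e->limit_low   = limit & 0xFFFF;
    e->granularity = ((limit >> 16) & 0x0F) | (gran & 0xF0);
    e->access      = access;
}
```

Entry 0 would be left all-zero as the NULL descriptor; a typical Ring 0 code segment uses access byte 0x9A and granularity 0xCF.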

    3. IDT:

The Interrupt Descriptor Table, or IDT, is used to tell the processor which Interrupt Service Routine (ISR) to call to handle either an exception or an 'int' opcode (in assembly). IDT entries are also invoked by Interrupt Requests whenever a device has completed a request and needs to be serviced. Each IDT entry is similar to a GDT entry: both hold a base address, both hold an access flag, and both are 64 bits long. The major differences between the two descriptor types lie in the meanings of these fields. In an IDT, the base address specified in the descriptor is the address of the Interrupt Service Routine that the processor should call when the interrupt is 'raised' (called). An IDT entry has no limit; instead it has a segment selector that must be specified: the segment in which the given ISR is located. This allows the processor to give control to the kernel through an interrupt that occurs while the processor is in a different ring (as when an application is running) [9].
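The split of the ISR address across an IDT gate can be sketched the same way as the GDT helper. The 32-bit gate layout and names below follow common tutorial code, not the paper:

```c
/* An IDT gate holds the ISR address split into low and high halves,
 * plus the code-segment selector the handler will run in. */
#include <stdint.h>

struct idt_entry {
    uint16_t base_lo;  /* ISR address bits 0..15          */
    uint16_t sel;      /* kernel code-segment selector    */
    uint8_t  always0;  /* reserved, must be zero          */
    uint8_t  flags;    /* present bit, ring level, type   */
    uint16_t base_hi;  /* ISR address bits 16..31         */
};

void idt_set_gate(struct idt_entry *e, uint32_t base,
                  uint16_t sel, uint8_t flags)
{
    e->base_lo = base & 0xFFFF;
    e->base_hi = (base >> 16) & 0xFFFF;
    e->sel     = sel;
    e->always0 = 0;
    e->flags   = flags;
}
```

A typical 32-bit interrupt gate in the kernel code segment would use selector 0x08 and flags 0x8E (present, Ring 0, 32-bit interrupt gate).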

    4. ISR:

Interrupt Service Routines, or ISRs, are used to save the current processor state and set up the appropriate segment registers needed for kernel mode before the kernel's C-level interrupt handler is called. This can all be handled in about 15 or 20 lines of assembly language, including the call into our C handler. We also need to point the correct entry in the IDT at the correct ISR in order to handle the right exception [13]. An exception is a special case the processor encounters when it cannot continue normal execution. This could be something like dividing by zero: the result is an unknown or non-real number, so the processor raises an exception so that the kernel can stop that process or task from causing any problems. If the processor finds that a program is trying to access a piece of memory that it should not, it raises a General Protection Fault. With paging set up, an access to an unmapped address causes a Page Fault, but this one is recoverable: the kernel can map a page of memory to the faulted address and resume [7] and [8].
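Once the assembly stubs hand control to C, the handler can map the exception number pushed on the stack to a readable message before deciding whether to kill the offending task. A sketch of that lookup (the messages follow the standard x86 exception ordering; the function name is illustrative):

```c
/* Mapping x86 exception numbers to human-readable names, as a
 * C-level fault handler might do before terminating a task. */
#include <stddef.h>
#include <string.h>

static const char *exception_messages[] = {
    "Division By Zero",            /* 0  */
    "Debug",                       /* 1  */
    "Non Maskable Interrupt",      /* 2  */
    "Breakpoint",                  /* 3  */
    "Overflow",                    /* 4  */
    "Out of Bounds",               /* 5  */
    "Invalid Opcode",              /* 6  */
    "No Coprocessor",              /* 7  */
    "Double Fault",                /* 8  */
    "Coprocessor Segment Overrun", /* 9  */
    "Bad TSS",                     /* 10 */
    "Segment Not Present",         /* 11 */
    "Stack Fault",                 /* 12 */
    "General Protection Fault",    /* 13 */
    "Page Fault",                  /* 14 */
};

const char *exception_name(unsigned int int_no)
{
    if (int_no < sizeof exception_messages / sizeof exception_messages[0])
        return exception_messages[int_no];
    return "Reserved";
}
```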

    5. IRQ and PIC:

      Interrupt Requests, or IRQs, are interrupts raised by hardware devices. Some devices generate an IRQ when they have data ready to be read, or when they finish a command such as writing a buffer to disk; in general, a device generates an IRQ whenever it wants the processor's attention. IRQs are generated by everything from network cards and sound cards to the mouse, keyboard, and serial ports [11].

    6. Build Batch File:

      A batch file is simply a collection of DOS commands, used here to assemble and link all the kernel module programs into a single file. There is no need to run all the build steps individually: running this one file, saved with the .bat extension, builds the developed kernel.

    7. Sample code:

echo Now assembling, compiling, and linking your kernel:

nasm -f aout -o start.o start.asm

rem Remember this spot here: We will add 'gcc' commands here to compile C sources

gcc -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions -nostdinc -fno-builtin -I./include -c -o main.o main.c

gcc -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions -nostdinc -fno-builtin -I./include -c -o scrn.o scrn.c

gcc -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions -nostdinc -fno-builtin -I./include -c -o gdt.o gdt.c

gcc -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions -nostdinc -fno-builtin -I./include -c -o idt.o idt.c

rem Link the objects into the final kernel image (a linker script, link.ld, is assumed)

ld -T link.ld -o kernel.bin start.o main.o scrn.o gdt.o idt.o

5 THREE WAYS TO EXECUTE THE PROPOSED KERNEL:

5.1 GRUB

      The Grand Unified Bootloader (GRUB) can replace the kernel already installed in the system, change specific modules, or remove kernel modules. These operations are all performed with GRUB commands such as find, kernel, boot, etc.

5.2 LKM

      LKM stands for Loadable Kernel Module. This method is used to reconfigure the kernel source tree on a running system by making the kernel file available as a loadable kernel module [14] and [15].

5.3 RPM AND .DEB FORMAT

Convert the kernel modules into the .rpm (RPM Package Manager) and .deb (Debian) file formats. The Fedora and openSUSE distributions support the .rpm package format, but Debian-based operating systems support only .deb packages. The .rpm kernel packages can be converted into .deb format easily with the help of the alien package tool. In this method, simply double-click the package file to install the kernel [9].

6 RESULT

This is the output of executing the developed kernel on different Linux platforms using the Grand Unified Bootloader method.

Figure 3: check whether the kernel modules are stored inside the system with the help of the find command, which locates the batch file.

General format: # find path/directory

After finding /root/kk/build.bat, execute the kernel .bin file using the kernel command.

General format: # kernel filename.bin

Load the kernel before executing the boot command.

Figure 4 shows the details of the old kernel and the new kernel.

Fig.3 Load the Developed Kernel

Fig.4 Boot the kernel and display the output

7 CONCLUSION AND FUTURE WORK:

This paper proposed a general kernel source-tree structure for heterogeneous Linux platforms. It provides platform independence and achieves portability of the system software, so there is no need to create a customized kernel structure for each and every distribution; this reduces operating-system development time, improves platform independence, and achieves all the kernel features and security integration synchronously. The open-source community no longer needs to maintain distribution-dependent code for developing separate kernels. Future work will implement this same kernel on the Windows platform as well.

REFERENCES:

  1. Ohloh source code analysis for Linux kernel 2.6.34, available from http://www.ohloh.net/p/linux/analyses/1226830.

  2. Open source kernel development team, http://www.kernel.org

  3. Kernel types, available from http://www.en.wikipedia.org/wiki/Linux_kernel

  4. Kernel basics, available from http://www.kernelnewbies.org

  5. Octavian Purdila, Lucian Adrian Grijincu, and Nicolae Tapus, "LKL: The Linux Kernel Library", IEEE 2005.

  6. Muli Ben-Yehuda, Eric Van Hensbergen and Marc Fiuczynski, "Minding the Gap: R&D in the Linux Kernel", IEEE 2010.

  7. Yasonari Goto, "Kernel Based Virtual Machine Technology", Fujitsu Science and Technology J., Vol. 47, No. 3, July 2011.

  8. Andrew Baumann, Paul Barham, Pierre-Evariste Dagand, Tim Harris, Rebecca Isaacs, Simon Peter, Timothy Roscoe, Adrian Schüpbach, and Akhilesh Singhania, "The Multikernel: A New OS Architecture for Scalable Multicore Systems", Microsoft Research 2014.

  9. Robert Berger, "How to Make a GNU/Linux Kernel Patch", W.E-2.1 2012.

  10. "Component-based Operating System Kernels", IEEE 2007.

  11. Robert Love, Linux Kernel Development, Third Edition, Addison-Wesley, ISBN 978-0-672-32916-6.

  12. Robert Love, Linux Kernel Development, Second Edition, Sams Publishing, January 12, 2005, ISBN 0-672-32720-1.

  13. NASM tool, available from http://sourceforge.net/projects/nasm/

  14. Linux kernel research & development, http://www.theregister.co.uk/2010/02/24/linux_kernel_randd_estimate_u_of_oviendo/

  15. Kernel history, http://www.kernel.org/pub/linux/kernel/people/gregkh/kernel_history/
