Virtualization strategies differ markedly between the cloud/server computing and edge/embedded computing markets. Generally speaking, virtualization offers the following benefits:
- Run multiple operating systems on a single computer, including Windows, Linux, QNX, Android, bespoke RTOS and more
- Increase energy efficiency, reduce hardware requirements and thereby reduce overall capital expenditure
- Deliver high availability and performance for enterprise applications
- Use computing resources efficiently
The following resources, in no particular order, can lend themselves to virtualization:
- The execution unit (CPU): By virtualizing the CPU, you can run multiple virtual CPUs on top of one physical CPU or core. This is done by time-slicing (or using a priority-based algorithm) the real CPU and letting each virtual CPU take some fraction of the real processor cycles.
- Memory: By virtualizing memory you can split up the physical memory so that multiple partitions can use some part of the real memory. An operating system running in such a partition might also utilize virtual memory to implement processes. In those cases, you have three levels of memory hierarchy: one for the hypervisor, one for an OS running in a virtual board, and one for the applications running in a process.
- Devices: When multiple virtual boards need to share devices such as serial ports, Ethernet ports, graphics, and so forth, you need to virtualize the devices as well. This is typically done by exposing a well-defined interface in a partition so that the OS running there makes API calls instead of accessing the device directly. The actual code that handles the device can live in one of two places: either in the hypervisor or in another guest operating system.
Sasken’s semiconductor offerings are particularly tailored for the embedded computing market, which is estimated to register a CAGR of 6.2% and reach $236.5 billion by 2022. I would therefore like to focus on the benefits our customers can accrue from virtualization.
Some particular benefits of virtualization in the embedded computing market are being realized in the automotive segment today. Automotive electronics hosted on a hypervisor are a good use case: head units, instrument clusters, and ADAS applications running on QNX alongside IVI/media/entertainment clusters running on an Android VM. The two virtual machines (VMs) rely on secure IPC mechanisms and sandbox-like isolation, allowing auto OEMs and third-party IVI providers to collaborate on, and share, a single on-board compute infrastructure.
Unlike servers or compute-centric systems, a key design metric for embedded systems is performance per watt of power dissipation: the system should be optimized to extract the best possible performance within a given power budget, and that budget is usually far more constrained than in the server world. While portability and flexibility are important, they are often not the primary concern. In summary, while the OS-hosted approach offers the greatest application and guest-OS portability, the bare-metal hypervisor approach offers the best performance and the lowest virtualization overhead, making it the better fit for most embedded systems.
AMP (asymmetric multiprocessing), SMP (symmetric multiprocessing), supervised AMP, and hypervisors are among the many strategies for implementing virtualization, and different applications require different approaches. Beyond the fundamental challenge of dividing up code to execute in parallel, projects adopting multicore chips face many other issues:
- Run-time support for OS configuration, resource sharing, and booting
- Communication between cores (IPC)
- Development tools support for configuration and prototyping, analysis, diagnosis, and testing
For More Information Visit: http://www.sasken.com