With improved processing and graphics performance, greater energy efficiency, and broad scalability, the 4th generation Intel® Core™ processors with their new microarchitecture provide an attractive solution for a broad array of mid-range to high-end embedded applications in target markets such as medical, embedded computing, industrial automation, infotainment, and military.
This whitepaper gives engineers a closer look at the architectural improvements of the new microarchitecture and explains how they can integrate it most efficiently into their appliances.

The 4th generation Intel® Core™ processors serve the embedded computing space with a new microarchitecture that Kontron will implement on a broad range of embedded computing platforms. Based on the 22 nm Intel® 3D processor technology already used in the predecessor generation, the processors, formerly codenamed ‘Haswell’, deliver a performance increase that will doubtlessly benefit applications. Beyond a 15% gain in CPU performance, graphics performance in particular has doubled in comparison to solutions based on the previous generation of processors. At the same time, the thermal footprint has remained practically the same or has even shrunk.
“Qseven does make it easier to design single board computers with, because bringing up an Android system is not easy, contrary to popular opinion. Everybody has an Android system, but those also drive a significant amount of volume in mobile, and they put a lot of investment and people into making that happen. You cannot really do that in other spaces. Bringing up a stable platform is very important, and just having that modular architecture means you do not really have to go and change things around too much.”
“Bringing up an Android or Linux BSP is non-trivial,” says Ravi Kodavarti, Senior Director of Business Development and Strategy at Inforce Computing, Inc. in Fremont, CA (www.inforcecomputing.com). “Say our application has a 6440 carrier board, and on top of it we put a Qseven COM. However, Wi-Fi is really on the COM and the GPS chip is on the COM as well. These are well-tested single board computer interfaces not just from a hardware standpoint but also from a software standpoint, and writing these drivers and bringing these up is a pain. Every time you want to do that on a custom board, it is reinventing the work.”
RapidIO's usage model required support for memory-to-memory transactions, including atomic read-modify-write operations. To meet these embedded requirements, RapidIO provides Remote Direct Memory Access (RDMA), messaging, and signaling constructs that can be implemented without software intervention. For example, in a RapidIO system, a processor can issue a load or store transaction, or an integrated DMA engine can transfer data between two memory locations. These transactions are conducted across the RapidIO fabric, wherever their source or destination addresses are located, and typically occur without any software intervention. As viewed by the processor, they are no different from common memory transactions.
RapidIO was also designed to support peer-to-peer transactions. It was assumed that multiple host or master processors would be in the system and that those processors would need to communicate with each other through shared memory, interrupts, and messages. Multiple processors (up to 16K) can be configured in a RapidIO network, each with its own complete address space.
As embedded devices continue to increase in complexity, the software development task has become the largest element of the typical project budget. Graphical interfaces, network protocols, and data security are just a few of the new requirements that design teams can find added on top of their custom application software. With this growing software burden along with customer demand for faster response times and instant data access, operating systems have become an essential element to organize and prioritize the software and hardware interaction routines. Unlike the desktop environment where only a few operating systems prevail, embedded designers have hundreds of options and the right choice depends on the special needs and requirements of each project.
It specifies provisioning and management for negotiating video capabilities between a source and sink device, standard video transcoding schemes built on H.264, transport and control schemes, packetization, and content protection based on High-bandwidth Digital Content Protection (HDCP) 2.0. Many of the device and in-vehicle discovery components of the protocol are built around the previously released Wi-Fi Direct specification. The Wi-Fi CERTIFIED Miracast specification enables car manufacturers to wirelessly mirror smartphone screens to in-dash LCDs, creating an immediately personalized interface in the dashboard. Additionally, this in-vehicle standards-based technology allows consumers to safely control smartphones through the dashboard so they can answer calls and check text messages.
Secure cloud computing is about more than just the network; it is also important to focus on identity and authentication management to make sure each piece of data in a cloud is being accessed by the proper individual. This is roughly akin to needing an ID card and a retina scan to enter a building and also needing additional authentication factors to access a file in a drawer. “So much client focus in the embedded computer is about the network,” Cloyd says. “However, you cannot just focus on a network-based, umbrella approach to protect systems. Data is the key to the embedded computer kingdom, so you have to protect the application, as well as the traditional network boundaries.”
Virtualization trends in commercial computing offer benefits for cost, reliability, and security, but pose a challenge for military operators who need to visualize lossless imagery in real time. 10 GbE technology enables a standard zero client solution for viewing pixel-perfect C4ISR sensor and graphics information with near zero interactive latency.
For C4ISR systems, ready access to and sharing of visual information at any operator position can increase situational awareness and mission effectiveness. Operators utilize multiple information sources including computers and camera feeds, as well as high-fidelity radar and sonar imagery. Deterministic real-time interaction with remote computers and sensors is required to shorten decision loops and enable rapid actions.

A zero client represents the smallest hardware footprint available for manned positions in a distributed computing environment. Zero clients provide user access to remote computers through a networked remote desktop connection or virtual desktop infrastructure. Utilizing a 10 GbE media network for interconnecting multiple computers, sensors, and clients provides the real-time performance and image quality required for critical visualization operations. The cost of deploying a 10 GbE infrastructure is falling rapidly, and 10G/40G has become the baseline for data center server interconnect. Additionally, deploying common multifunction crew-station equipment at all operator positions brings system-level cost and logistics benefits. The following discussion examines the evolution to thinner clients and the path to a real-time service-oriented architecture, in addition to looking at zero client benefits and applications.
Evolution to thinner clients
For military C4ISR, capabilities provided by legacy stovepipe implementations are being consolidated into networked multifunction systems of systems. To accomplish this, open standards and rapidly advancing technologies for service-oriented architectures are being leveraged (Figure 1). For crew-station equipment, this drives an evolution from dedicated high-power workstations toward thinner client equipment at user locations. Computing equipment is being consolidated away from the operators into one or more data centers. This leaves the crew station with a remote connection to system resources, but does not ease the requirement for high-performance access to visual information. 10 GbE provides the client/server connection performance necessary for real-time remote communication.
Figure 1: Client/server evolution: Increasing communications bandwidth enables more service-oriented computing and “thinner” clients.
Workstations at operator positions normally run software applications locally and provide dedicated resources for data and graphics processing. Server-based data processing and networked sensor distribution systems have moved much of the application processing away from the operator. This can simplify the job of system administration and maintenance and enables multiple users to access the same capabilities. However, much of the processing for presenting images to operators can be unique to the individual needs for varying roles at each position.
Thin clients can be utilized to provide dedicated graphics and video processing horsepower for user-specific visualization operations such as windowing, rendering, and mixing multiple data and sensor sources. Dedicated local graphics processing power can be important for critical real-time operations or for interfacing to servers without high-performance graphics capabilities. This makes a thin “networked visualization client” a flexible option for multifunction crew stations that must interface with both legacy and newer service-oriented systems.
For commercial computing systems, a major push is underway to move high-performance graphics capability into the data center servers. This can be implemented via dedicated workstations for each crew station, virtualized compute engines with dedicated graphics for each crew station, or completely virtualized environments with networked image distribution. Virtualization provides a means to share CPU and GPU compute cycles between multiple users, gaining efficiency from higher utilization of system hardware resources. However, for mission-critical C4ISR systems, a deterministic Quality-of-Service level for performance, reliability, and security must be maintained.
For systems with both computing and graphics processing located away from the operator, zero clients provide network-attached displays with audio and user input devices (keyboard, mouse, and touch screen). Minimizing size, weight, and power at the operator position brings many benefits, but performance depends on the remote visualization processing capabilities and the communication channel. To match workstation performance, a consistent human-computer interaction latency of less than 50 ms must be provided.
Path to a real-time service-oriented architecture
System architects need a graceful technology insertion path that leverages the benefits of thinner clients (Figure 2). One approach for centralizing computing equipment while maintaining performance is to simply move the workstations to the data center and extend the interfaces to the display and input devices. This maintains the dedicated computing resources for critical operations. Video and device interface extension can be accomplished via extenders or switch matrices to provide connections between operators and computers.
Figure 2: Crew-station evolution to a service-oriented architecture
A more flexible approach is to utilize a standard network to support highly configurable access to all workstation resources from any operator position. With this approach, any user can connect to any image source and user screens can be shared with collaborative remote displays or other users. This also enables growth to a service-oriented “cloud” architecture that follows the trend for general-purpose IT and data processing systems. However, commercial IT products do not always meet the performance, reliability, security, or logistics requirements for mission-critical C4ISR systems.
To leverage this computing trend for real-time applications, a standard 10 GbE media network can be utilized to connect multiple zero clients to multiple remote graphics and sensor sources. Lossless distribution is supported for high-quality text, dynamic 2D/3D graphics, HD video, radar, and sonar imagery. Compositing multiple sources onto a single screen can be performed at the zero client or by networked video processing services. Near-zero latency interaction and video distribution are now possible and support deterministic performance and real-time dynamic visualization at any operator position.
One full-resolution (1,920 x 1,200) lossless channel at 60 Hz with 24-bit color requires 3.3 Gbps of bandwidth. Therefore, one 10 GbE connection can support a dual-head crew station at full frame rate with audio and USB support. However, many visual applications require no more than a 30 Hz update rate (including 1,080p/30 HD full-motion video), which reduces the bandwidth to 1.7 Gbps per channel. This enables triple-head crew stations with audio and USB support over a single 10 GbE connection. Dual Ethernet ports at the zero client can also be provided to support more video channels, higher frame rates, and/or redundant connections.
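The bandwidth figures above can be checked with simple arithmetic. The sketch below assumes raw uncompressed pixels (24 bits each) and ignores any protocol overhead, which is consistent with the article's rounded numbers:

```python
def video_gbps(width, height, bits_per_pixel, frame_rate_hz):
    """Raw (uncompressed, lossless) video bandwidth in Gbps."""
    return width * height * bits_per_pixel * frame_rate_hz / 1e9

# One 1,920 x 1,200 channel with 24-bit color, at 60 Hz and 30 Hz
full_rate = video_gbps(1920, 1200, 24, 60)   # ~3.3 Gbps
half_rate = video_gbps(1920, 1200, 24, 30)   # ~1.7 Gbps

print(round(full_rate, 1), round(half_rate, 1))   # 3.3 1.7

# Dual-head at 60 Hz (~6.6 Gbps) and triple-head at 30 Hz (~5.0 Gbps)
# both fit within one 10 GbE link, leaving headroom for audio and USB.
print(2 * full_rate < 10, 3 * half_rate < 10)     # True True
```

The headroom calculation also explains why dual Ethernet ports are the stated path to more channels or higher frame rates: a second 60 Hz head already consumes roughly two-thirds of a single 10 GbE link.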
Zero client benefits
Compared to workstations, zero clients provide several benefits, including lower TCO, reduced SWaP, higher system availability, and greater system security and agility.
Reduced total cost of ownership
Zero clients provide the smallest, simplest, and most maintainable equipment available for the operator position. This means lower initial investment costs as well as lower operating and maintenance costs throughout the system life cycle. System modularity and standard interfaces support seamless technology refresh as new computing and display equipment becomes available. 10 GbE has been widely adopted for data centers and standard component costs are declining rapidly. When compared to legacy stovepipe systems, networked systems also greatly reduce the amount of dedicated cabling required.
Reduced size, weight, and power
Only video, audio, and USB encoding/decoding functions are required with a zero client. These are packaged as small dongles or integrated into the display. Small packaging enables new options for lightweight operator consoles with increased ergonomics, as well as reducing noise and the burden on cooling systems for manned areas.
High system availability
System uptime and reliability benefit from consolidating all computing elements into managed data centers. Common equipment at multiple operator positions and redundant network connections support rapid recovery from computer, client, or network equipment failures.
High system security
Security risks are reduced through centralized administration and access authentication at the data center. Additionally, stateless zero client equipment outside the data center and encrypted communications between all components assure system confidentiality and integrity.
System agility
Systems using common crew-station equipment can be reconfigured by software for different mission roles and objectives. Additional clients can be added quickly to extend the system. Also, as computing systems evolve with new virtual desktop infrastructures, today’s investment in zero client equipment is preserved through standard interfaces for video, audio, and user input devices including DVI, PC audio, and USB.
Applications of a zero client
In addition to the benefits of a zero client, the technology’s agility also enables a range of applications using common equipment. For example, remote crew stations can now be smaller, lighter, and more versatile, and operator equipment can be located at remote locations not previously possible. Noisy, heat-generating computing equipment can be moved away from operator positions.
Another application highly suited to zero client utilization is the multifunction crew station. Common crew-station equipment can be used to access multiple computers and sensor sources under secure software control. This supports the capability for dynamic access to multiple systems from a single location. Systems can be rapidly reconfigured for different mission objectives, operating roles, or failure recovery.
Collaborative and remote displays also benefit from zero client usage. Unmanned displays can be attached to the network for sharing real-time visual information for dissemination and collaboration. Large area displays for several viewers can receive multiple feeds with full performance. Additionally, selected sources can be compressed and transmitted through secure routers for wider area distribution.
Using zero client technology for networked multifunction crew stations enables the integration of legacy capabilities into a consolidated operating environment as well as the development of new concepts of operation. One example of this is Barco’s zero client technology, which brings the benefits of state-of-the-art computing architectures into mission-critical C4ISR systems involving advanced visualization.
Mission-critical solution
Leveraging commercial computing trends and standards provides significant cost and capability benefits. However, the level of real-time performance, mission assurance, and information assurance required for mission-critical C4ISR systems must be achieved. Zero client technology enabled by 10 GbE provides the necessary pixel-perfect viewing of graphics and sensor information for these demanding applications.
Connectors drive many of the key standards used in open architecture embedded computing platforms, as many types of connectors are used in a typical critical embedded system (such as backplane connectors, mezzanine connectors, I/O connectors, and specialty connectors). They also hold the key to realizing advancing system capabilities, and in this regard they have run into problems. This feature discusses the trends, challenges, and future of connectors for critical embedded systems.
Industrial Grade cards use SLC (Single-Level Cell) NAND flash memory, which has the highest level of endurance. The tradeoff of using SLC NAND versus the lower endurance MLC (Multi-Level Cell) and TLC (Triple-Level Cell) NAND is its significantly higher cost. Typically, Industrial Grade products are rated at >2 million endurance cycles per logical block, which is ~20 times the MLC NAND rating and up to 10,000 times better than TLC NAND based products.
Many of today’s embedded systems incorporate multiple analog sensors that make devices more intelligent and provide users with an array of information, resulting in improved efficiency or added convenience. The Analog Front End (AFE), which connects the sensor to the digital world of the MCU, is often an assumed “burden” in designing sensor interface circuits. However, the latest concept in a configurable AFE, integrated into a single package, is helping system designers overcome sensor integration challenges associated with tuning and sensor drift, thereby reducing time to market. The following discussion examines how the versatility of such a technology allows the designer to tune and debug AFE characteristics on the fly, automate trimming and adjust for sensor drift, and add scalability to support multiple sensor types with a single platform.