Configurable Media Processor: Wireless Multimedia Solution

Wireless multimedia relies on sophisticated video software and server technology, while streaming video and audio generation rely on complex processing techniques. One product that will clearly benefit from wireless multimedia technology is the Personal Digital Assistant (PDA). However, PDAs differ widely in microprocessor performance: some can support low-frame-rate, low-resolution video streams in software, while others cannot support video streaming at all. A system that gives PDAs high-quality two-way video communication requires substantial computing power.


Figure 1. The evaluation flow for the MediaWorks configurable processor architecture, implemented with an Altera APEX 20K1500 FPGA.

One way to improve performance is to use common small-form-factor interface standards such as PCMCIA and CompactFlash (CF) cards, which the latest PDAs support. Advanced video and wireless processing systems can be built on these card standards. The biggest obstacle to delivering excellent streaming video on PDAs is that these interfaces cannot meet the bandwidth requirements of high-resolution video; video compression can solve the problem partially or completely. However, streaming video codecs such as MPEG-4 were developed for systems with relatively unconstrained resources, such as PCs running at several GHz, and they do not meet the quality, cost, power, and performance requirements of the many wireless multimedia devices on the market. Developing processors that accelerate streaming video codecs has therefore become a key issue.
This article describes an approach to implementing high-quality streaming video on a PDA. MediaWorks built a configurable media processor on programmable logic and developed a complete, optimized solution around it. The configurable media processor also enables a hardware/software co-design process, which is critical to engineering productivity and accelerates the development of video codecs at different performance levels.
PDA devices all have a screen for viewing graphics and a speaker and microphone for playing and capturing audio, but they typically lack an integrated camera for video capture. For cost reasons, they use the most economical processors available, which usually cannot support bidirectional streaming video, so additional processing and compression/decompression hardware and software are required. This article addresses MPEG-4 video capture, transmission, and playback on existing PDA devices.
The first step in the design was to add video capture capability by developing a PCMCIA-based VGA-resolution camera with a sensor that operates at a sufficient frame rate and image size. Although most PDAs cannot display a full VGA image, the entire image can be viewed by a PC user once it is transmitted over the Internet. The PCMCIA interface is easy to develop against, and the CompactFlash specification can serve as an alternative. In this basic architecture, all image data is transferred to the PDA over the PCMCIA bus, and the PDA encodes the outgoing image and decodes the incoming image in software.
The initial result was 7 to 12 frames per second for QCIF (176 × 144) images, both the image seen by the local user and the image being sent by the local user. These results demonstrated the feasibility of the concept. However, to satisfy the PC users receiving the video, the image needed to be larger and the frame rate higher for smoother viewing. The next-generation design required architectural changes to solve these problems.
Adding video to the PDA still requires a PCMCIA/CF-based VGA camera, but raising the system's video performance means addressing both the PCMCIA bus bottleneck and the PDA's limited computational performance. This is achieved by placing the video encoder on the camera side of the PCMCIA interface. Depending on the image sequence, encoded video needs less than one tenth of the data rate of the uncompressed stream, so encoding on the camera side of the interface bus allows larger video images, and more video frames, to be transmitted to the PDA.
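The benefit of camera-side encoding can be checked with some rough arithmetic. In the sketch below, the pixel depth, frame rate, and sustained PCMCIA transfer budget are illustrative assumptions, not figures from the article; only the one-tenth compression ratio comes from the text.

```python
# Back-of-the-envelope check of why encoding on the camera side of the
# PCMCIA bus helps. All specific numbers here are assumptions.

def uncompressed_rate_bytes(width, height, bytes_per_pixel, fps):
    """Raw video data rate in bytes per second."""
    return width * height * bytes_per_pixel * fps

# VGA at 15 fps with 16-bit (2-byte) pixels (assumed format):
raw = uncompressed_rate_bytes(640, 480, 2, 15)   # 9,216,000 B/s, ~9.2 MB/s

# Encoded at one tenth of the raw rate, per the article:
encoded = raw / 10                               # ~0.92 MB/s

# Against an assumed sustained PCMCIA budget of 2 MB/s, the raw stream
# does not fit, while the encoded stream fits with room to spare.
bus_budget = 2e6
can_send_raw = raw <= bus_budget          # False
can_send_encoded = encoded <= bus_budget  # True
```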
Because encoding includes a partial decode, video encoding is computationally heavier than decoding, so an encoder for the VGA camera was developed using configurable processing technology. This new video architecture is also suitable for next-generation products that add wireless capability. Video decoding remains in software on the PDA.
With this new architecture, the system achieves:
* CIF-resolution images at 30 frames per second
* VGA images at more than 20 frames per second

The conventional design approach is to find the off-the-shelf processor that best meets the task requirements, but with configurable processors such as Altera's Nios or Tensilica's Xtensa, the processor itself can be customized to the task. Designing a configurable processor follows these steps. First, the initial software and hardware configuration is evaluated on the APEX 20KE FPGA development system or an instruction set simulator, and performance bottlenecks are identified by analyzing the results. A hardware, software, or combined solution to the current biggest bottleneck is then proposed, implemented (parameterized instructions, processor configuration changes, coprocessors, a new partitioning, etc.), and the results are re-evaluated; the evaluation should confirm the performance improvement. This cycle of evaluating, proposing a solution, and verifying it through further evaluation repeats until the hardware/software solution meets the performance requirements, as shown in Figure 1. Some bottlenecks may ultimately prove unavoidable, but the design gradually converges on an optimum.
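The iterative flow of Figure 1 can be sketched as a loop. The `profile`, `propose_fix`, and `apply` callables below are hypothetical placeholders standing in for FPGA-board or instruction-set-simulator evaluation runs and for design changes; they are not a real tool API.

```python
# Sketch of the evaluate/optimize co-design loop. The profile(),
# propose_fix(), and apply() arguments are hypothetical placeholders.

def codesign_loop(design, profile, propose_fix, apply, target_fps,
                  max_iterations=10):
    """Repeat: measure, find the worst bottleneck, fix it in hardware
    or software, and re-measure, until performance meets the target."""
    for _ in range(max_iterations):
        report = profile(design)       # run on the FPGA board or ISS
        if report["fps"] >= target_fps:
            return design              # requirements met
        fix = propose_fix(report)      # custom instruction, cache
                                       # change, coprocessor, ...
        design = apply(design, fix)    # implement and loop back
    return design                      # best effort within the budget
```

A stub run with a design that gains 5 fps per fix, starting at 10 fps with a 25 fps target, converges after three iterations.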
The configurable processor scheme must have the following conditions:
* The processor has a parameterized instruction set.
* The processor has configurable structures, such as cache sizes.
* The processor has an external or application-specific coprocessor.
* The processor can run under multiprocessor conditions.
* Any combination of the above.
With parameterized instruction processing, developers can evaluate the code on the basic processor configuration and find its bottlenecks. A common bottleneck is insufficient caching. With a configurable processor, developers can go back and reconfigure the processor, providing larger instruction or data caches or raising cache associativity from 2-way to 3-way or 4-way. Another kind of bottleneck occurs when a too-narrow data path limits coding efficiency for one or many pixel blocks. With a configurable processor, the data path can be widened to process an entire row of pixels at a time, saving processor cycles, and specific instructions can be created to exploit the wider path. Taking MPEG-4 as an example, the sum of absolute differences (SAD) calculation can be customized so that 16 independent 8-bit pixel operations are replaced by a single 128-bit instruction that processes all 16 pixel values simultaneously.
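The SAD operation being accelerated is simple to state in software. The sketch below shows the scalar computation that the custom 128-bit instruction collapses into one operation; the function names and the macroblock framing are illustrative.

```python
# Scalar sketch of the SAD computation. A custom instruction operating
# on two 128-bit registers (16 x 8-bit lanes) performs the same
# reduction for a whole pixel row in a single instruction.

def sad_row(row_a, row_b):
    """Sum of absolute differences over one 16-pixel row of 8-bit values.

    In plain software this is 16 subtract/abs/accumulate steps, one
    per pixel; the 128-bit instruction does all 16 lanes at once.
    """
    assert len(row_a) == len(row_b) == 16
    return sum(abs(a - b) for a, b in zip(row_a, row_b))

def sad_block(block_a, block_b):
    """Total SAD over a 16x16 macroblock (16 rows of 16 pixels)."""
    return sum(sad_row(ra, rb) for ra, rb in zip(block_a, block_b))
```

Motion estimation evaluates this SAD for many candidate positions per macroblock, which is why a 16x speedup on the inner reduction matters so much.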
Instructions can also be customized for the discrete cosine transform (DCT), but a dedicated coprocessor may be the better choice: a pipelined DCT coprocessor is well suited to the job. A software DCT can easily consume 15% to 20% of the processor's cycles. If the processor lacks the required headroom, the DCT software can be replaced with hardware running at 60 to 80 MHz.
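For reference, the transform in question is the 8x8 DCT-II used in MPEG-4 texture coding. The naive software version below is the kind of code whose cycle cost motivates a pipelined coprocessor; it is a textbook definition, not MediaWorks' implementation.

```python
import math

# Reference 8x8 DCT-II (orthonormal form). A quadruple loop like this
# is the straightforward software DCT whose 15-20% cycle cost the text
# mentions; a pipelined coprocessor computes the same transform in
# hardware, and fast factorizations reduce the software cost as well.

N = 8

def dct_8x8(block):
    """Return the 8x8 DCT-II coefficients of an 8x8 pixel block."""
    def c(k):
        return math.sqrt(0.5) if k == 0 else 1.0
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out
```

A quick sanity check: a flat block of 1s produces a single DC coefficient of 8.0 and zero everywhere else.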
A multiprocessor design is similar to using a dedicated coprocessor. Video encoding and decoding are serial in nature: each frame passes in sequence through stages such as DCT or inverse DCT (iDCT) and quantization or dequantization, so a frame can be handed from one processor to the next, each performing a specific function. The overall frame rate is then set by the slowest processor in the pipeline. With this approach, the processor pipeline has an initial startup delay, and the original encoding/decoding software must be restructured for parallel operation.
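The two timing properties of such a pipeline, throughput set by the slowest stage and startup latency set by the sum of all stages, can be sketched directly. The stage names and millisecond figures below are illustrative assumptions.

```python
# Sketch of pipelined frame processing across processors. Stage times
# are illustrative, not measured values from the article.

def pipeline_timing(stage_ms):
    """Steady-state frame rate and first-frame latency of a pipeline.

    Each stage (e.g. motion estimation, DCT/quantization, entropy
    coding) runs on its own processor; frames move down the pipeline,
    so throughput is limited by the slowest stage, while the first
    frame must traverse every stage before it emerges.
    """
    slowest = max(stage_ms)
    fps = 1000.0 / slowest      # frames per second at steady state
    startup = sum(stage_ms)     # latency of the first frame, in ms
    return fps, startup

fps, startup = pipeline_timing([20.0, 33.3, 25.0])
# The 33.3 ms stage caps throughput at ~30 fps; the first frame
# appears only after all three stages, ~78.3 ms.
```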
Preliminary results from this design show a significant improvement in processing cycles, but further optimization (a distributed-arithmetic DCT, architecture redesign, reduced intermediate storage, etc.) is still required.
The current results are summarized in Tables 1 to 4.
Once the final design is complete, the solution can be migrated to an ASIC or converted from the FPGA design to an Altera HardCopy device to reduce cost.
This article has briefly discussed how a configurable media processor can retrofit existing PDAs with multimedia features. The key is hardware/software co-design: when developing solutions to eliminate performance bottlenecks, designers must weigh the possible hardware solutions against the software ones.
