To register a signal handler in Linux:
signal(SIGINT, signalhandler);
The signal handler runs in the main thread; the main thread is interrupted while the handler executes.
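A minimal runnable sketch of the registration above (the handler name is just the one from the note; sigaction() is the more robust modern alternative):

#include <signal.h>
#include <unistd.h>

/* A handler must only use async-signal-safe calls; write() is one of them. */
static void signalhandler(int signum)
{
    const char msg[] = "caught SIGINT\n";
    (void)signum;
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}

int main(void)
{
    signal(SIGINT, signalhandler);  /* register handler for Ctrl-C */
    for (;;)
        pause();                    /* main thread sleeps until a signal interrupts it */
    return 0;
}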
2011/03/29
Linux Library
There are two kinds of libraries in Linux:
Static Library & Shared Library
Code from a static library is copied into the final executable.
Code from a shared library is not included in the final executable.
A shared library can be used in two ways (see the sketch below):
1. explicit linking (at build time, designate which library to link against)
2. dynamic loading (at run time, good for plugins)
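A small sketch of the dynamic-loading path; "libfoo.so" and its symbol "foo" are made-up names, and the program has to be linked with -ldl:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Explicit linking would instead happen at build time, e.g. gcc main.c -lfoo,
     * and the loader would map libfoo.so automatically at program start.
     * Dynamic loading resolves the library at run time, which suits plugins: */
    void *handle = dlopen("libfoo.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    int (*foo)(void) = (int (*)(void))dlsym(handle, "foo");
    if (foo)
        printf("foo() returned %d\n", foo());

    dlclose(handle);
    return 0;
}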
Static Library & Shared Library
code from Static Library is actually included in final executable.
code from Shared Library is not included in final executable.
For Shared Library, we can use it in two ways:
1. explicit linking (during compile time, designate which lib to link)
2. dynamic loading (during run time, good for plugin)
Memory Bandwidth Problem
1) Noise appears on the camera preview window.
This was solved by using source rotation instead of display rotation; display rotation needs more time for memory access.
2) Noise appears on the screen when we take a multi-shot.
This was solved by raising the clock.
3) Noise appears on the last preview frame when taking a multi-shot.
It could be worked around by raising the clock, but we finally found it was a bug caused by an incorrect panel setting.
How Audio Is Played on a Certain Mobile Phone
The audio path is composed of two parts: decoders and a mixer.
* There can be multiple decoders active at the same time.
* The output of every decoder goes into the mixer.
* The input of a decoder is encoded frames; the output of a decoder is PCM.
* The input of the mixer is connected to the decoder outputs; the mixer mixes all the streams into one stream.
aac -- aac decoder -- PCM \
amr -- amr decoder -- PCM - Mixer -- PCM
mp3 -- mp3 decoder -- PCM /
There is one DSP. The aac/amr/mp3 decoder applets and the mixer applet share the DSP resource.
A non-preemptive OS schedules all the applets.
A DSP session example:
AMR Decoder --> Volume Control --> Format Conversion --> Sample Rate Conversion --> Effects --> Audio Render.
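A minimal sketch of what the mixer stage does with the decoders' PCM outputs (real mixers also handle resampling and per-stream gain; this just sums 16-bit samples and saturates):

#include <stdint.h>
#include <stddef.h>

/* Mix several decoder outputs (16-bit PCM, same rate and length) into one stream. */
void mix_pcm(const int16_t **inputs, size_t num_inputs,
             int16_t *out, size_t num_samples)
{
    for (size_t i = 0; i < num_samples; ++i) {
        int32_t acc = 0;
        for (size_t s = 0; s < num_inputs; ++s)
            acc += inputs[s][i];
        if (acc > INT16_MAX) acc = INT16_MAX;   /* saturate instead of wrapping */
        if (acc < INT16_MIN) acc = INT16_MIN;
        out[i] = (int16_t)acc;
    }
}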
How a Barcode Reader Works
1. Put the camera in viewfinder mode.
2. Copy a frame from the camera (still in viewfinder mode) into the chip memory. The data format is YUV420.
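Since barcode decoding only needs luminance, a typical next step is to take the Y plane of that YUV420 frame and binarize it before running the decoder. A rough sketch (the fixed threshold of 128 is just an assumption):

#include <stdint.h>
#include <stddef.h>

/* In planar YUV420 the Y plane is the first width*height bytes of the frame,
 * so it can be thresholded without touching the U/V planes. */
void binarize_y_plane(const uint8_t *yuv420_frame, size_t width, size_t height,
                      uint8_t *out_bitmap)
{
    const uint8_t *y_plane = yuv420_frame;          /* Y comes first */
    for (size_t i = 0; i < width * height; ++i)
        out_bitmap[i] = (y_plane[i] > 128) ? 1 : 0; /* simple fixed threshold */
}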
Audio Video Sync Knowledge
This is how A/V sync is done:
The audio time is read and passed to the video processing module.
The video block compares the current video time with the audio time and decides whether video should be played faster or slower.
How can video be played faster or slower?
The video block decodes a frame and then displays it on the screen; by extending or shortening the display time, video can be played faster or slower. Note that the decoding time can never be reduced and no frame is dropped.
A/V mis-sync examples:
1. We consume too much time in the video decoding process, so that even when we display each decoded frame for the shortest possible time, video still cannot catch up with audio.
This is how we spent too much time in video decoding: we byte-aligned the video ring buffer before decoding every frame, and at 30 fps that consumes a lot of computation time.
2. A/V mis-sync happened during encoding. Audio encoding starts much later than video encoding, yet the first encoded audio frame and the first encoded video frame both carry timestamp 0. So when we play the encoded file, the video appears delayed.
3. A/V mis-sync happened during VT (video telephony). The network has at most 48 kbps of bandwidth for the encoded video frames, but at the start of encoding the bitrate bursts to 56 kbps in the first second (then drops to the 48 kbps we set). This delays the transfer of video data by about 0.5 (or 2?) seconds. Since the full 48 kbps stays occupied afterwards, the delayed data never gets a chance to catch up, so the delay introduced at the beginning persists steadily, becoming neither worse nor better.
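A hypothetical sketch of the decision described above, with audio as the master clock: the video block stretches or shrinks the display duration of each frame so video drifts back toward the audio time, and no frame is dropped.

#include <stdint.h>

int64_t next_display_duration_us(int64_t video_pts_us,  /* timestamp of the frame about to show */
                                 int64_t audio_time_us, /* current audio playback time          */
                                 int64_t nominal_us)    /* e.g. 33333 us per frame at 30 fps    */
{
    int64_t drift = video_pts_us - audio_time_us; /* >0: video ahead, <0: video behind      */
    int64_t duration = nominal_us + drift;        /* extend when ahead, shorten when behind */

    /* Decoding time cannot be reduced, so display time has a floor; if video is
     * still behind at the floor, it simply keeps lagging (example 1 above). */
    const int64_t min_us = 5000;
    if (duration < min_us)
        duration = min_us;
    return duration;
}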
Companion Chip Memory Access Modes
Direct mode: the chip memory can be accessed as host system memory; this takes more space on the board.
Indirect mode: the chip memory cannot be accessed as host system memory.
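An illustration of the difference, using made-up register addresses: in direct mode the chip's memory is simply mapped into the host address space and read like ordinary memory, while in indirect mode the host writes an address register on the chip and then reads the value back through a data register, one word per access.

#include <stdint.h>

#define CHIP_ADDR_REG ((volatile uint32_t *)0x48000000)  /* hypothetical address register */
#define CHIP_DATA_REG ((volatile uint32_t *)0x48000004)  /* hypothetical data register    */

static uint32_t chip_read_indirect(uint32_t chip_addr)
{
    *CHIP_ADDR_REG = chip_addr;  /* tell the chip which internal address we want   */
    return *CHIP_DATA_REG;       /* read the value back through the data register  */
}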
Data Abort on the Companion Chip
When the companion video chip is powered off, any register access to the chip causes a data abort.
DCT & Fourier Transform
The discrete cosine transform (DCT) helps separate the image into parts (or spectral sub-bands) of differing importance (with respect to the image's visual quality). The DCT is similar to the discrete Fourier transform: it transforms a signal or image from the spatial domain to the frequency domain .
From Book: "Introduction to digital audio coding and standards"
The Fourier Transform is the basic tool for converting a signal from its representation in time x(t) into a corresponding representation in frequency X(f).
For sin, cos signal, Notice that the Fourier Transform has components only at frequencies equal to positive and negative f0. This is a general property of periodic functions.
Notice that great deal of data reduction associated with representing this signal, as opposed to having to store its value at every point in time x(t).
Few words of my understanding:
At a period of time, instead of record the value at every time point, we record thefrequency of waves that composite the signal. According to the frequency information, we can calculate the value of signal at a certain time of point.
Mapping time to frequency is in fact a trade of memory space and calculation time.
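For reference, the transform pair being described (standard definitions, not quoted from the book):

X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt
x(t) = \int_{-\infty}^{\infty} X(f)\, e^{+j 2\pi f t}\, df

For x(t) = \cos(2\pi f_0 t), the result is X(f) = \frac{1}{2}\left[\delta(f - f_0) + \delta(f + f_0)\right], i.e. components only at +f_0 and -f_0, which is the property mentioned above.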
Round a Up to a Multiple of b
#define DIVIDE_ROUNDUP(a,b) (((a)+((b)-1))/(b))
// return a/b rounded up to the next integer
#define ROUNDUP(a,b) (DIVIDE_ROUNDUP(a,b)*(b))
// round a up to the next multiple of b
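A quick sanity check of the macros above (assuming a >= 0, b > 0, and integer types):

#include <assert.h>

#define DIVIDE_ROUNDUP(a,b) (((a)+((b)-1))/(b))
#define ROUNDUP(a,b)        (DIVIDE_ROUNDUP(a,b)*(b))

int main(void)
{
    assert(DIVIDE_ROUNDUP(13, 8) == 2);  /* 13/8 = 1.625, rounds up to 2        */
    assert(ROUNDUP(13, 8) == 16);        /* next multiple of 8 at or above 13   */
    assert(ROUNDUP(16, 8) == 16);        /* already a multiple, stays unchanged */
    return 0;
}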
2011/03/21
ARM Address Access Problem
Converting an odd-sized frame from NV12 format to YUV420 format.
Two problems happen:
1. If the size is odd, the last row or last column of Y data has no UV data to match.
2. ldrh must load from an EVEN address, otherwise a data abort will happen.
--------------
The problem happens in this line:
udata = *( uvPlaneData + ( y / 2 )* uvStride + ( x / 2 ) );
Notice that "udata" and "uvPlaneData" are of type uint_16 (16-bit unsigned).
The assembly code generated for the above line is:
...
mla r11,r3,r4,r7
add r5,r6,r5,lsl #0x1
bic r6,r1,#0x1
ldrh r6,[r5,+r6]
...
A data abort happens in the first iteration of the FOR loop.
When x is 0 and y is 0, r5 holds the value of uvPlaneData, the start address of the UV plane.
r6 is 0, so "ldrh r6,[r5,+r6]" loads the data at the start address of the UV plane into r6 (udata).
When "ldrh r6,[r5,+r6]" executes, the data abort happens!
Here is the reason:
1. ldrh is a halfword load, and it must load from an EVEN address.
2. When the frame size is odd, the UV plane starts at an ODD address.
So I changed some data types from uint_16 to uint_8 to avoid generation of the "ldrh" instruction.
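A sketch of that workaround, assuming the interleaved NV12 UV layout and reusing the variable names from the note (with uvStride taken as a byte pitch here): reading the two chroma bytes individually lets the compiler emit byte loads (ldrb), which have no alignment requirement, instead of a halfword ldrh that faults when the UV plane starts at an odd address.

#include <stdint.h>

static void read_uv(const uint8_t *uvPlaneData, int uvStride, int x, int y,
                    uint8_t *udata, uint8_t *vdata)
{
    /* NV12 stores U and V interleaved: one UV byte pair per 2x2 block of Y. */
    const uint8_t *uv = uvPlaneData + (y / 2) * uvStride + (x / 2) * 2;
    *udata = uv[0];  /* byte access, no alignment constraint */
    *vdata = uv[1];
}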
Audio Video Sync Problem
This is how A/V sync is done:
The audio time is read and passed to the video processing module.
The video block compares the current video time with the audio time and decides whether video should be played faster or slower.
How can video be played faster or slower?
The video block decodes a frame and then displays it on the screen; by extending or shortening the display time, video can be played faster or slower.
Please note that the decoding time can never be reduced and no frame is dropped.
A/V mis-sync examples:
1. We consume too much time in the video decoding process, so that even when we display each decoded frame for the shortest possible time, video still cannot catch up with audio.
This is how we spent too much time in video decoding: we byte-aligned the video ring buffer before decoding every frame, and at 30 fps that consumes a lot of computation time.
2. A/V mis-sync happened during encoding. Audio encoding starts much later than video encoding, yet the first encoded audio frame and the first encoded video frame both carry timestamp 0. So when we play the encoded file, the video appears delayed.
3. A/V mis-sync happened during VT (video telephony). The network has at most 48 kbps of bandwidth for the encoded video frames, but at the start of encoding the bitrate bursts to 56 kbps in the first second (then drops to the 48 kbps we set). This delays the transfer of video data by about 0.5 (or 2?) seconds. Since the full 48 kbps stays occupied afterwards, the delayed data never gets a chance to catch up, so the delay introduced at the beginning persists steadily, becoming neither worse nor better.