Modified sampling in the Burst method
In processing-based systems implemented on FPGA chips, the two most common methods are Burst and Stream processing. Most designers prefer the Burst method, and here I will briefly discuss its advantages and disadvantages.
Advantages:
1- Requires fewer hardware resources, including:
a. Memory
b. Multipliers
c. Logic Cells
2- Allows designs on smaller chips
3- Lower price
a. The required chip is smaller and cheaper
4- Lower power losses
Disadvantages:
1- Slower processing speed
a. Uses a sequential processing method
b. Shares the available resources among operations
2- Timing complexity
a. More complex coding and more complicated simulation
b. Challenges caused by the slow processing
3- The possibility of fractures (discontinuities) in the input samples
a. Signal fracture errors
b. Scattering of the signal energy
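The resource-versus-speed trade-off above can be sketched in software. This is a hypothetical illustration (a 4-tap FIR with made-up coefficients, not from the article): the Burst style reuses one shared multiplier sequentially, while the Stream style spends one multiplier per tap to finish in a single cycle.

```python
# Illustrative sketch: Burst (one shared multiplier, sequential) vs.
# Stream (one multiplier per tap, parallel) for the same 4-tap FIR.
# Coefficients and samples are invented for the example.

COEFFS = [1, 2, 2, 1]

def fir_burst(window):
    """Burst style: a single multiplier-accumulator reused sequentially,
    so it needs len(COEFFS) cycles per output sample."""
    acc = 0
    cycles = 0
    for c, x in zip(COEFFS, window):
        acc += c * x          # the one shared multiplier, time-multiplexed
        cycles += 1
    return acc, cycles

def fir_stream(window):
    """Stream style: all tap products computed in parallel in hardware,
    so one cycle per output sample at the cost of 4x the multipliers."""
    products = [c * x for c, x in zip(COEFFS, window)]  # parallel in HW
    return sum(products), 1

window = [3, 1, 4, 1]
y_b, cyc_b = fir_burst(window)
y_s, cyc_s = fir_stream(window)
assert y_b == y_s            # identical result...
print(cyc_b, cyc_s)          # ...but 4 cycles vs. 1
```

Both paths compute the same output; the difference is only where the cost is paid, in time (Burst) or in multipliers (Stream).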
Clients often seek to reduce the cost of the final product, so regardless of the benefits of the Stream design, they choose the cheaper option. Another advantage of the Burst method is that the required chip is smaller, which further reduces the price of the final product.
Samples of the input signal are stored in an input memory at a specific frequency, determined by the application and the input frequency band. This sampling frequency is certainly much lower than our DSP clock frequency. The frequency difference forces the processor to wait for new samples during the sampling intervals. To get the best performance out of the DSP, the overall architecture of the system must be arranged so that the DSP never has a chance to rest: samples must be stored without interruption, so that whenever the DSP needs new samples in its processing cycle, they are available.
The easiest way to avoid missing input samples is to use a FIFO or a Dual-Port RAM.
This method is usually effective, but it consumes a lot of resources: the faster the processing cycle and the slower the sampling cycle, the larger the sample memory we need. Even if we bring the processing cycle and the sampling cycle close together, in the best case the input data memory must cover twice the sampling time.
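The "twice the sampling time" requirement is the classic ping-pong (double-buffer) arrangement: while the DSP reads one full bank, the ADC fills the other. The sketch below is my software model of that idea, with illustrative names; it is not code from the article.

```python
# Minimal software model of the double-buffer (ping-pong) variant of the
# Dual-Port RAM approach: two 1024-sample banks, so total input memory is
# twice the block size. Names and structure are illustrative.

BLOCK = 1024

class PingPong:
    def __init__(self):
        self.banks = [[], []]    # two banks -> 2x BLOCK total storage
        self.write_bank = 0      # the bank the ADC is currently filling

    def push_sample(self, s):
        """Store one ADC sample; return a full block when a bank fills."""
        bank = self.banks[self.write_bank]
        bank.append(s)
        if len(bank) == BLOCK:           # bank full: hand it to the DSP
            self.write_bank ^= 1         # ADC switches to the other bank
            self.banks[self.write_bank] = []
            return bank                  # complete block, ready to process
        return None

pp = PingPong()
blocks = [b for s in range(3 * BLOCK) if (b := pp.push_sample(s))]
print(len(blocks), blocks[1][0])   # 3 full blocks; second starts at 1024
```

No sample is ever lost, but the price is a full second bank of memory, which is exactly the resource cost the shadow-memory idea below avoids.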
Why?
Because when we use a 1024-cell memory, for example, adding only 8 more cells requires one extra address bit, which means the memory module must grow to 2048 cells. In the system design only 8 extra cells were needed, but 1016 cells of this module remain unused.
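The arithmetic behind that waste is just power-of-two address sizing:

```python
# Power-of-two sizing arithmetic from the paragraph above: growing a
# 1024-cell memory by only 8 cells forces an 11th address bit.

import math

old_cells = 1024
needed    = old_cells + 8                 # 1032 cells actually required
addr_bits = math.ceil(math.log2(needed))  # 11 address bits
new_cells = 2 ** addr_bits                # so the module becomes 2048 cells
wasted    = new_cells - needed            # 1016 cells never used

print(addr_bits, new_cells, wasted)       # 11 2048 1016
```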
To solve this problem, I used a separate 8-cell memory, which I named SHADOW MEMORY, to store the new samples while the processor occupies the sample memory to perform its calculations. Once the processor has fetched all the samples from the input memory, I append the 8 new samples to the end of the sample memory, copying them over the oldest cells, but this time at about a thousand times the sampling frequency, and I move the processor's read pointer to address 1016. When the processor then accesses the memory via DMA, it receives the 8 new samples first, followed by the 1016 older samples.
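The steps above can be modeled in software. This is my interpretation of the scheme, a hedged sketch rather than the actual HDL: class and method names are invented, and the "fast copy" that runs at many times the sampling rate in hardware is a plain slice assignment here.

```python
# Software model of the SHADOW MEMORY scheme: a 1024-cell sample memory
# plus an 8-cell shadow buffer. While the DSP owns the main memory, new
# samples land in the shadow; afterwards the shadow is copied over the
# oldest cells and the read pointer is moved so the DSP sees the newest
# samples first. All names are illustrative, not from the article.

MAIN, SHADOW = 1024, 8

class ShadowMemory:
    def __init__(self):
        self.main = list(range(MAIN))   # pretend it holds samples 0..1023
        self.shadow = []
        self.read_ptr = 0

    def sample_during_read(self, s):
        """An ADC sample arriving while the DSP occupies the main memory."""
        self.shadow.append(s)

    def commit_shadow(self):
        """The fast copy (done at ~1000x the sampling rate in hardware):
        overwrite the 8 oldest cells, then point the reader at them."""
        self.read_ptr = MAIN - SHADOW                 # address 1016
        self.main[self.read_ptr:] = self.shadow
        self.shadow = []

    def dma_read_all(self):
        """DMA order: the 8 new samples first, then the 1016 older ones."""
        return self.main[self.read_ptr:] + self.main[:self.read_ptr]

mem = ShadowMemory()
for s in range(2000, 2008):        # 8 new samples arrive mid-processing
    mem.sample_during_read(s)
mem.commit_shadow()
out = mem.dma_read_all()
print(out[:8])       # the 8 newest samples come out first
print(len(out))      # still 1024 samples total, none lost
```

Compared with doubling the Dual-Port RAM, the extra cost here is only 8 cells plus a movable read pointer, which is the whole point of the technique.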
The following images show the simulation results of the data processing, including the processing error caused by sampling fractures, as well as its correction by adding a small SHADOW RAM instead of a larger Dual-Port RAM.
Fractures and loss of input samples
Computation error in ABS(FFT) due to fractures in sampling
HSA signal: transfer from the shadow memory and application of the Welch window to the DSP input samples
XDO signal: the input of the processor
Transfer of the shadow memory contents to the data memory @ 120 MHz
CKRS signal: sampling frequency @ 107 kHz
FFT signal: processing result without fracture error
AD signal: input samples
RDW signal: Welch window
RDS signal: input sample memory to the CPU
AS signal: CPU address pointer
EW signal: memory address of the samples
EOC signal: end-of-conversion flag