Texas Instruments TMS320 DSP User Manual Page 19

2.3 Data Memory
void PRE_filter1(int input[], int length, int *z)
{
    int i, tmp;

    for (i = 0; i < length; i++) {
        tmp = input[i] - z[0] + (13 * z[1] + 16) / 32;
        z[1] = z[0];
        z[0] = input[i];
        input[i] = tmp;
    }
}
Replacing references to global data with references to parameters illustrates a general technique that can be used to make virtually any code reentrant. One simply defines a "state object" that contains all of the state necessary for the algorithm; a pointer to this object is passed to the algorithm (along with the input and output data).
typedef struct PRE_Obj {    /* state obj for pre-emphasis alg */
    int z0;
    int z1;
} PRE_Obj;
void PRE_filter2(PRE_Obj *pre, int input[], int length)
{
    int i, tmp;

    for (i = 0; i < length; i++) {
        tmp = input[i] - pre->z0 + (13 * pre->z1 + 16) / 32;
        pre->z1 = pre->z0;
        pre->z0 = input[i];
        input[i] = tmp;
    }
}
Although this C code looks more complicated than our original implementation, its performance is comparable, it is fully reentrant, and it can be configured on a "per data object" basis. Since each state object can be placed in any data memory, it is possible to place some objects in on-chip memory and others in external memory. The pointer to the state object is, in effect, the function's private "data page pointer." All of the function's data can be efficiently accessed by a constant offset from this pointer.
Notice that while performance is comparable to our original implementation, this version is slightly larger and slower because of the state-object redirection: directly referencing global data is often more efficient than referencing data via an address register. On the other hand, the decrease in efficiency can usually be factored out of the time-critical loop and into the loop-setup code. Thus, the incremental performance cost is minimal, and the benefit is that this same code can be used in virtually any system—independent of whether the system must support a single channel or multiple channels, or whether it is preemptive or non-preemptive.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." —Donald Knuth, "Structured Programming with go to Statements," Computing Surveys, Vol. 6, No. 4, December 1974, p. 268.
The performance difference between on-chip data memory and off-chip memory (even 0 wait-state SRAM) is so large that every algorithm vendor designs code to operate as much as possible within the on-chip memory. Since this performance gap is expected to widen dramatically in the next 3-5 years, the trend will continue for the foreseeable future. The TMS320C6000 series, for example, incurs a 25 wait-state penalty for external SDRAM data memory access. Future processors may see this penalty increase to 80 or even 100 wait states!
SPRU352G June 2005 Revised February 2007 General Programming Guidelines 19