RTEMS CPU Kit with SuperCore
score/cpu/epiphany/rtems/score/cpu.h
5 /*
6  *
7  * Copyright (c) 2015 University of York.
8  * Hesham ALMatary <hmka501@york.ac.uk>
9  *
10  * COPYRIGHT (c) 1989-1999.
11  * On-Line Applications Research Corporation (OAR).
12  *
13  * Redistribution and use in source and binary forms, with or without
14  * modification, are permitted provided that the following conditions
15  * are met:
16  * 1. Redistributions of source code must retain the above copyright
17  * notice, this list of conditions and the following disclaimer.
18  * 2. Redistributions in binary form must reproduce the above copyright
19  * notice, this list of conditions and the following disclaimer in the
20  * documentation and/or other materials provided with the distribution.
21  *
22  * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
23  * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
24  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
25  * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
26  * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
27  * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
28  * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
29  * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
30  * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
31  * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
32  * SUCH DAMAGE.
33  */
34 
35 #ifndef _EPIPHANY_CPU_H
36 #define _EPIPHANY_CPU_H
37 
38 #ifdef __cplusplus
39 extern "C" {
40 #endif
41 
42 #include <rtems/score/epiphany.h> /* pick up machine definitions */
43 #include <rtems/score/types.h>
44 #ifndef ASM
45 #include <rtems/bspIo.h>
46 #include <stdint.h>
47 #include <stdio.h>
48 #endif
49 
50 /* conditional compilation parameters */
51 
52 /*
53  * Should the calls to _Thread_Enable_dispatch be inlined?
54  *
55  * If TRUE, then they are inlined.
56  * If FALSE, then a subroutine call is made.
57  *
58  * Basically this is an example of the classic trade-off of size
59  * versus speed. Inlining the call (TRUE) typically increases the
60  * size of RTEMS while speeding up the enabling of dispatching.
61  * [NOTE: In general, the _Thread_Dispatch_disable_level will
62  * only be 0 or 1 unless you are in an interrupt handler and that
63  * interrupt handler invokes the executive.] When not inlined,
64  * something calls _Thread_Enable_dispatch, which in turn calls
65  * _Thread_Dispatch. If the enable dispatch is inlined, then
66  * one subroutine call is avoided entirely.
67  *
68  */
69 
70 #define CPU_INLINE_ENABLE_DISPATCH FALSE
71 
72 /*
73  * Should the body of the search loops in _Thread_queue_Enqueue_priority
74  * be unrolled one time? If unrolled, each iteration of the loop examines
75  * two "nodes" on the chain being searched. Otherwise, only one node
76  * is examined per iteration.
77  *
78  * If TRUE, then the loops are unrolled.
79  * If FALSE, then the loops are not unrolled.
80  *
81  * The primary factor in making this decision is the cost of disabling
82  * and enabling interrupts (_ISR_Flash) versus the cost of the rest of the
83  * body of the loop. On some CPUs, the flash is more expensive than
84  * one iteration of the loop body. In this case, it might be desirable
85  * to unroll the loop. It is important to note that on some CPUs, this
86  * code is the longest interrupt disable period in RTEMS. So it is
87  * necessary to strike a balance when setting this parameter.
88  *
89  */
90 
91 #define CPU_UNROLL_ENQUEUE_PRIORITY TRUE
92 
93 /*
94  * Does RTEMS manage a dedicated interrupt stack in software?
95  *
96  * If TRUE, then a stack is allocated in _ISR_Handler_initialization.
97  * If FALSE, nothing is done.
98  *
99  * If the CPU supports a dedicated interrupt stack in hardware,
100  * then it is generally the responsibility of the BSP to allocate it
101  * and set it up.
102  *
103  * If the CPU does not support a dedicated interrupt stack, then
104  * the porter has two options: (1) execute interrupts on the
105  * stack of the interrupted task, and (2) have RTEMS manage a dedicated
106  * interrupt stack.
107  *
108  * If this is TRUE, CPU_ALLOCATE_INTERRUPT_STACK should also be TRUE.
109  *
110  * Only one of CPU_HAS_SOFTWARE_INTERRUPT_STACK and
111  * CPU_HAS_HARDWARE_INTERRUPT_STACK should be set to TRUE. It is
112  * possible that both are FALSE for a particular CPU, although it
113  * is unclear what that would imply about the interrupt processing
114  * procedure on that CPU.
115  *
116  * Currently, for the Epiphany port, _ISR_Handler is responsible for
117  * switching to the RTEMS dedicated interrupt stack.
118  *
119  */
120 
121 #define CPU_HAS_SOFTWARE_INTERRUPT_STACK TRUE
122 
123 /*
124  * Does this CPU have hardware support for a dedicated interrupt stack?
125  *
126  * If TRUE, then it must be installed during initialization.
127  * If FALSE, then no installation is performed.
128  *
129  * If this is TRUE, CPU_ALLOCATE_INTERRUPT_STACK should also be TRUE.
130  *
131  * Only one of CPU_HAS_SOFTWARE_INTERRUPT_STACK and
132  * CPU_HAS_HARDWARE_INTERRUPT_STACK should be set to TRUE. It is
133  * possible that both are FALSE for a particular CPU, although it
134  * is unclear what that would imply about the interrupt processing
135  * procedure on that CPU.
136  *
137  */
138 
139 #define CPU_HAS_HARDWARE_INTERRUPT_STACK FALSE
140 
141 /*
142  * Does RTEMS allocate a dedicated interrupt stack in the Interrupt Manager?
143  *
144  * If TRUE, then the memory is allocated during initialization.
145  * If FALSE, then the memory is not allocated.
146  *
147  * This should be TRUE if CPU_HAS_SOFTWARE_INTERRUPT_STACK is TRUE
148  * or CPU_INSTALL_HARDWARE_INTERRUPT_STACK is TRUE.
149  *
150  */
151 
152 #define CPU_ALLOCATE_INTERRUPT_STACK TRUE
153 
154 /*
155  * Does RTEMS invoke the user's ISR with the vector number and
156  * a pointer to the saved interrupt frame (1) or just the vector
157  * number (0)?
158  *
159  */
160 
161 #define CPU_ISR_PASSES_FRAME_POINTER 1
162 
163 /*
164  * Does the CPU have hardware floating point?
165  *
166  * If TRUE, then the RTEMS_FLOATING_POINT task attribute is supported.
167  * If FALSE, then the RTEMS_FLOATING_POINT task attribute is ignored.
168  *
169  * If there is a FP coprocessor such as the i387 or mc68881, then
170  * the answer is TRUE.
171  *
172  * The macro name "epiphany_HAS_FPU" should be made CPU specific.
173  * It indicates whether or not this CPU model has FP support. For
174  * example, it would be possible to have an i386_nofp CPU model
175  * which set this to false to indicate that you have an i386 without
176  * an i387 and wish to leave floating point support out of RTEMS.
177  *
178  * The CPU_SOFTWARE_FP is used to indicate whether or not there
179  * is software implemented floating point that must be context
180  * switched. The determination of whether or not this applies
181  * is very tool specific and the state saved/restored is also
182  * compiler specific.
183  *
184  * epiphany Specific Information:
185  *
186  * At this time there are no implementations of Epiphany that are
187  * expected to implement floating point.
188  */
189 
190 #define CPU_HARDWARE_FP FALSE
191 #define CPU_SOFTWARE_FP FALSE
192 
193 /*
194  * Are all tasks RTEMS_FLOATING_POINT tasks implicitly?
195  *
196  * If TRUE, then the RTEMS_FLOATING_POINT task attribute is assumed.
197  * If FALSE, then the RTEMS_FLOATING_POINT task attribute is followed.
198  *
199  * If CPU_HARDWARE_FP is FALSE, then this should be FALSE as well.
200  *
201  */
202 
203 #define CPU_ALL_TASKS_ARE_FP FALSE
204 
205 /*
206  * Should the IDLE task have a floating point context?
207  *
208  * If TRUE, then the IDLE task is created as a RTEMS_FLOATING_POINT task
209  * and it has a floating point context which is switched in and out.
210  * If FALSE, then the IDLE task does not have a floating point context.
211  *
212  * Setting this to TRUE negatively impacts the time required to preempt
213  * the IDLE task from an interrupt because the floating point context
214  * must be saved as part of the preemption.
215  *
216  */
217 
218 #define CPU_IDLE_TASK_IS_FP FALSE
219 
220 /*
221  * Should the saving of the floating point registers be deferred
222  * until a context switch is made to another different floating point
223  * task?
224  *
225  * If TRUE, then the floating point context will not be stored until
226  * necessary. It will remain in the floating point registers and not
227  * disturbed until another floating point task is switched to.
228  *
229  * If FALSE, then the floating point context is saved when a floating
230  * point task is switched out and restored when the next floating point
231  * task is restored. The state of the floating point registers between
232  * those two operations is not specified.
233  *
234  * If the floating point context does NOT have to be saved as part of
235  * interrupt dispatching, then it should be safe to set this to TRUE.
236  *
237  * Setting this flag to TRUE results in using a different algorithm
238  * for deciding when to save and restore the floating point context.
239  * The deferred FP switch algorithm minimizes the number of times
240  * the FP context is saved and restored. The FP context is not saved
241  * until a context switch is made to another, different FP task.
242  * Thus in a system with only one FP task, the FP context will never
243  * be saved or restored.
244  *
245  */
246 
247 #define CPU_USE_DEFERRED_FP_SWITCH FALSE
248 
249 /*
250  * Does this port provide a CPU dependent IDLE task implementation?
251  *
252  * If TRUE, then the routine _CPU_Thread_Idle_body
253  * must be provided and is the default IDLE thread body instead of
254  * _Thread_Idle_body.
255  *
256  * If FALSE, then use the generic IDLE thread body if the BSP does
257  * not provide one.
258  *
259  * This is intended to allow for supporting processors which have
260  * a low power or idle mode. When the IDLE thread is executed, then
261  * the CPU can be powered down.
262  *
263  * The order of precedence for selecting the IDLE thread body is:
264  *
265  * 1. BSP provided
266  * 2. CPU dependent (if provided)
267  * 3. generic (if no BSP and no CPU dependent)
268  *
269  */
270 
271 #define CPU_PROVIDES_IDLE_THREAD_BODY TRUE
272 
273 /*
274  * Does the stack grow up (toward higher addresses) or down
275  * (toward lower addresses)?
276  *
277  * If TRUE, then the stack grows upward.
278  * If FALSE, then the stack grows toward smaller addresses.
279  *
280  */
281 
282 #define CPU_STACK_GROWS_UP FALSE
283 
284 /*
285  * The following is the variable attribute used to force alignment
286  * of critical RTEMS structures. On some processors it may make
287  * sense to have these aligned on tighter boundaries than
288  * the minimum requirements of the compiler in order to have as
289  * much of the critical data area as possible in a cache line.
290  *
291  * The placement of this macro in the declaration of the variables
292  * is based on the syntactic requirements of the GNU C
293  * "__attribute__" extension. For example with GNU C, use
294  * the following to force a structure to a 32 byte boundary.
295  *
296  * __attribute__ ((aligned (32)))
297  *
298  * NOTE: Currently only the Priority Bit Map table uses this feature.
299  * To benefit from using this, the data must be heavily
300  * used so it will stay in the cache and be used frequently enough
301  * in the executive to justify turning this on.
302  *
303  */
304 
305 #define CPU_STRUCTURE_ALIGNMENT __attribute__ ((aligned (64)))
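
A small host-side sketch of what the attribute above buys: a structure forced onto a 64-byte boundary so a hot table fits in one cache line. All names here are illustrative, not part of the port.

```c
#include <stdint.h>

/* Hypothetical hot structure, aligned the same way as
 * CPU_STRUCTURE_ALIGNMENT aligns the priority bit map. */
typedef struct {
  uint32_t bit_map[8];
} __attribute__ ((aligned (64))) demo_priority_map_t;

static demo_priority_map_t demo_map;

/* Returns 1 when the object's address is on a 64-byte boundary. */
static int demo_map_is_aligned(void)
{
  return ((uintptr_t) &demo_map % 64u) == 0;
}
```

With GCC, the attribute also pads `sizeof` up to a multiple of the alignment, so the 32 data bytes occupy a full 64-byte line.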
306 
307 /*
308  * Define what is required to specify how the network to host conversion
309  * routines are handled.
310  *
311  * epiphany Specific Information:
312  *
313  * This port is configured for little endian operation, which is
314  * what the Epiphany architecture uses. If you want big endian,
315  * you'll have to make the appropriate adjustments here and write
316  * efficient routines for byte swapping. The Epiphany architecture
317  * doesn't do this very well.
318  */
319 
320 #define CPU_HAS_OWN_HOST_TO_NETWORK_ROUTINES FALSE
321 #define CPU_BIG_ENDIAN FALSE
322 #define CPU_LITTLE_ENDIAN TRUE
323 
324 /*
325  * The following defines the number of bits actually used in the
326  * interrupt field of the task mode. How those bits map to the
327  * CPU interrupt levels is defined by the routine _CPU_ISR_Set_level().
328  *
329  */
330 
331 #define CPU_MODES_INTERRUPT_MASK 0x00000001
332 
333 /*
334  * Processor defined structures required for cpukit/score.
335  */
336 
337 /*
338  * Contexts
339  *
340  * Generally there are 2 types of context to save.
341  * 1. Interrupt registers to save
342  * 2. Task level registers to save
343  *
344  * This means we have the following 3 context items:
345  * 1. task level context stuff:: Context_Control
346  * 2. floating point task stuff:: Context_Control_fp
347  * 3. special interrupt level context :: Context_Control_interrupt
348  *
349  * On some processors, it is cost-effective to save only the callee
350  * preserved registers during a task context switch. This means
351  * that the ISR code needs to save those registers which do not
352  * persist across function calls. It is not mandatory to make this
353  * distinction between the caller/callee saved registers for the
354  * purpose of minimizing context saved during task switch and on interrupts.
355  * If the cost of saving extra registers is minimal, simplicity is the
356  * choice. Save the same context on interrupt entry as for tasks in
357  * this case.
358  *
359  * Additionally, if gdb is to be made aware of RTEMS tasks for this CPU, then
360  * care should be used in designing the context area.
361  *
362  * On some CPUs with hardware floating point support, the Context_Control_fp
363  * structure will not be used or it simply consists of an array of a
364  * fixed number of bytes. This is done when the floating point context
365  * is dumped by a "FP save context" type instruction and the format
366  * is not really defined by the CPU. In this case, there is no need
367  * to figure out the exact format -- only the size. Of course, although
368  * this is enough information for RTEMS, it is probably not enough for
369  * a debugger such as gdb. But that is another problem.
370  *
371  *
372  */
373 #ifndef ASM
374 
375 typedef struct {
376  uint32_t r[64];
377 
378  uint32_t status;
379  uint32_t config;
380  uint32_t iret;
381 
382 #ifdef RTEMS_SMP
383 
420  volatile bool is_executing;
421 #endif
422 } Context_Control;
423 
424 #define _CPU_Context_Get_SP( _context ) \
425  (_context)->r[13]
426 
427 typedef struct {
429  double some_float_register;
430 } Context_Control_fp;
431 
432 typedef Context_Control CPU_Interrupt_frame;
433 
434 /*
435  * The size of the floating point context area. On some CPUs this
436  * will not be a "sizeof" because the format of the floating point
437  * area is not defined -- only the size is. This is usually on
438  * CPUs with a "floating point save context" instruction.
439  *
440  * epiphany Specific Information:
441  *
442  */
443 
444 #define CPU_CONTEXT_FP_SIZE 0
445 SCORE_EXTERN Context_Control_fp _CPU_Null_fp_context;
446 
447 /*
448  * Amount of extra stack (above minimum stack size) required by
449  * MPCI receive server thread. Remember that in a multiprocessor
450  * system this thread must exist and be able to process all directives.
451  *
452  */
453 
454 #define CPU_MPCI_RECEIVE_SERVER_EXTRA_STACK 0
455 
456 /*
457  * Should be large enough to run all RTEMS tests. This ensures
458  * that a "reasonable" small application should not have any problems.
459  *
460  */
461 
462 #define CPU_STACK_MINIMUM_SIZE 4096
463 
464 /*
465  * CPU's worst alignment requirement for data types on a byte boundary. This
466  * alignment does not take into account the requirements for the stack.
467  *
468  */
469 
470 #define CPU_ALIGNMENT 8
471 
472 /*
473  * This is defined if the port has a special way to report the ISR nesting
474  * level. Most ports maintain the variable _ISR_Nest_level.
475  */
476 #define CPU_PROVIDES_ISR_IS_IN_PROGRESS FALSE
477 
478 /*
479  * This number corresponds to the byte alignment requirement for the
480  * heap handler. This alignment requirement may be stricter than that
481  * for the data types alignment specified by CPU_ALIGNMENT. It is
482  * common for the heap to follow the same alignment requirement as
483  * CPU_ALIGNMENT. If the CPU_ALIGNMENT is strict enough for the heap,
484  * then this should be set to CPU_ALIGNMENT.
485  *
486  * NOTE: This does not have to be a power of 2 although it should be
487  * a multiple of 2 greater than or equal to 2. The requirement
488  * to be a multiple of 2 is because the heap uses the least
489  * significant field of the front and back flags to indicate
490  * that a block is in use or free. So you do not want any odd
491  * length blocks, which would put length data in that bit.
492  *
493  * On byte oriented architectures, CPU_HEAP_ALIGNMENT normally will
494  * have to be greater than or equal to CPU_ALIGNMENT to ensure that
495  * elements allocated from the heap meet all restrictions.
496  *
497  */
498 
499 #define CPU_HEAP_ALIGNMENT CPU_ALIGNMENT
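
The note above explains why CPU_HEAP_ALIGNMENT must be even: the heap reuses the least significant bit of each block's size field as an in-use flag. A toy illustration of that packing (names are made up, not the real heap code):

```c
#include <stdint.h>

#define DEMO_USED_FLAG 1u

/* Pack an "in use" flag into the low bit of a block size. This only
 * works because real block sizes are multiples of CPU_HEAP_ALIGNMENT,
 * an even number, so the low bit of a genuine size is always zero. */
static uint32_t demo_pack(uint32_t size, int in_use)
{
  return size | (in_use ? DEMO_USED_FLAG : 0u);
}

static uint32_t demo_size(uint32_t packed)
{
  return packed & ~DEMO_USED_FLAG;  /* mask the flag back out */
}

static int demo_in_use(uint32_t packed)
{
  return (packed & DEMO_USED_FLAG) != 0;
}
```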
500 
501 /*
502  * This number corresponds to the byte alignment requirement for memory
503  * buffers allocated by the partition manager. This alignment requirement
504  * may be stricter than that for the data types alignment specified by
505  * CPU_ALIGNMENT. It is common for the partition to follow the same
506  * alignment requirement as CPU_ALIGNMENT. If the CPU_ALIGNMENT is strict
507  * enough for the partition, then this should be set to CPU_ALIGNMENT.
508  *
509  * NOTE: This does not have to be a power of 2. It does have to
510  * be greater than or equal to CPU_ALIGNMENT.
511  *
512  */
513 
514 #define CPU_PARTITION_ALIGNMENT CPU_ALIGNMENT
515 
516 /*
517  * This number corresponds to the byte alignment requirement for the
518  * stack. This alignment requirement may be stricter than that for the
519  * data types alignment specified by CPU_ALIGNMENT. If the CPU_ALIGNMENT
520  * is strict enough for the stack, then this should be set to 0.
521  *
522  * NOTE: This must be a power of 2, either 0 or greater than CPU_ALIGNMENT.
523  *
524  */
525 
526 #define CPU_STACK_ALIGNMENT 8
527 
528 /* ISR handler macros */
529 
530 /*
531  * Support routine to initialize the RTEMS vector table after it is allocated.
532  *
533  * Epiphany Specific Information:
534  *
535  * XXX document implementation including references if appropriate
536  */
537 
538 #define _CPU_Initialize_vectors()
539 
540 /*
541  * Disable all interrupts for an RTEMS critical section. The previous
542  * level is returned in _level.
543  *
544  */
545 
546 static inline uint32_t epiphany_interrupt_disable( void )
547 {
548  uint32_t sr;
549  __asm__ __volatile__ ("movfs %[sr], status \n" : [sr] "=r" (sr):);
550  __asm__ __volatile__("gid \n");
551  return sr;
552 }
553 
554 static inline void epiphany_interrupt_enable(uint32_t level)
555 {
556  __asm__ __volatile__("gie \n");
557  __asm__ __volatile__ ("movts status, %[level] \n" :: [level] "r" (level):);
558 }
559 
560 #define _CPU_ISR_Disable( _level ) \
561  _level = epiphany_interrupt_disable()
562 
563 /*
564  * Enable interrupts to the previous level (returned by _CPU_ISR_Disable).
565  * This indicates the end of an RTEMS critical section. The parameter
566  * _level is not modified.
567  *
568  */
569 
570 #define _CPU_ISR_Enable( _level ) \
571  epiphany_interrupt_enable( _level )
572 
573 /*
574  * This temporarily restores interrupts to _level before immediately
575  * disabling them again. This is used to divide long RTEMS critical
576  * sections into two or more parts. The parameter _level is not
577  * modified.
578  *
579  */
580 
581 #define _CPU_ISR_Flash( _level ) \
582  do{ \
583  if ( (_level & 0x2) != 0 ) \
584  _CPU_ISR_Enable( _level ); \
585  epiphany_interrupt_disable(); \
586  } while(0)
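
A host-side simulation of the disable/enable/flash protocol above, to show how the saved level threads through a critical section. `demo_status` stands in for the saved status word, and in this sketch bit 0x2 set means "interrupts were enabled", which is the bit _CPU_ISR_Flash tests. All names are illustrative, not part of the port.

```c
#include <stdint.h>

static uint32_t demo_status = 0x2; /* interrupts currently enabled */

static uint32_t demo_disable(void)
{
  uint32_t previous = demo_status;
  demo_status &= ~0x2u;   /* like "gid": globally disable interrupts */
  return previous;        /* caller keeps this as _level */
}

static void demo_enable(uint32_t level)
{
  demo_status = level;    /* like "gie" + movts: restore saved level */
}

static void demo_flash(uint32_t level)
{
  if ((level & 0x2u) != 0)
    demo_enable(level);   /* briefly reopen the interrupt window */
  (void) demo_disable();  /* then close it again */
}
```

Note that flashing leaves interrupts disabled afterwards, exactly as the macro does; only the final enable restores the saved level.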
587 
588 /*
589  * Map interrupt level in task mode onto the hardware that the CPU
590  * actually provides. Currently, interrupt levels which do not
591  * map onto the CPU in a generic fashion are undefined. Someday,
592  * it would be nice if these were "mapped" by the application
593  * via a callout. For example, m68k has 8 levels 0 - 7, levels
594  * 8 - 255 would be available for bsp/application specific meaning.
595  * This could be used to manage a programmable interrupt controller
596  * via the rtems_task_mode directive.
597  *
598  * The get routine usually must be implemented as a subroutine.
599  *
600  */
601 
602 void _CPU_ISR_Set_level( uint32_t level );
603 
604 uint32_t _CPU_ISR_Get_level( void );
605 
606 /* end of ISR handler macros */
607 
608 /* Context handler macros */
609 
610 /*
611  * Initialize the context to a state suitable for starting a
612  * task after a context restore operation. Generally, this
613  * involves:
614  *
615  * - setting a starting address
616  * - preparing the stack
617  * - preparing the stack and frame pointers
618  * - setting the proper interrupt level in the context
619  * - initializing the floating point context
620  *
621  * This routine generally does not set any unnecessary register
622  * in the context. The state of the "general data" registers is
623  * undefined at task start time.
624  *
625  * NOTE: This is_fp parameter is TRUE if the thread is to be a floating
626  * point thread. This is typically only used on CPUs where the
627  * FPU may be easily disabled by software such as on the SPARC
628  * where the PSR contains an enable FPU bit.
629  *
630  */
631 
639 #define EPIPHANY_GCC_RED_ZONE_SIZE 128
640 
658 void _CPU_Context_Initialize(
659  Context_Control *context,
660  void *stack_area_begin,
661  size_t stack_area_size,
662  uint32_t new_level,
663  void (*entry_point)( void ),
664  bool is_fp,
665  void *tls_area
666 );
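
One detail a context-initialize routine on this port must respect is the GCC red zone (EPIPHANY_GCC_RED_ZONE_SIZE above). A sketch of how an initial stack pointer might be derived for a downward-growing stack, under that assumption; `demo_initial_sp` is a hypothetical helper, not the port's actual code:

```c
#include <stdint.h>
#include <stddef.h>

#define DEMO_RED_ZONE_SIZE   128u  /* mirrors EPIPHANY_GCC_RED_ZONE_SIZE */
#define DEMO_STACK_ALIGNMENT 8u    /* mirrors CPU_STACK_ALIGNMENT */

/* Compute the initial stack pointer: start at the top of the stack
 * area, leave room for the GCC red zone below the eventual SP, and
 * round down to the required stack alignment. */
static uintptr_t demo_initial_sp(uintptr_t stack_begin, size_t stack_size)
{
  uintptr_t sp = stack_begin + stack_size - DEMO_RED_ZONE_SIZE;
  return sp & ~(uintptr_t)(DEMO_STACK_ALIGNMENT - 1u);
}
```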
667 
668 /*
669  * This routine is responsible for somehow restarting the currently
670  * executing task. If you are lucky, then all that is necessary
671  * is restoring the context. Otherwise, there will need to be
672  * a special assembly routine which does something special in this
673  * case. Context_Restore should work most of the time. It will
674  * not work if restarting self conflicts with the stack frame
675  * assumptions of restoring a context.
676  *
677  */
678 
679 #define _CPU_Context_Restart_self( _the_context ) \
680  _CPU_Context_restore( (_the_context) )
681 
682 /*
683  * The purpose of this macro is to allow the initial pointer into
684  * a floating point context area (used to save the floating point
685  * context) to be at an arbitrary place in the floating point
686  * context area.
687  *
688  * This is necessary because some FP units are designed to have
689  * their context saved as a stack which grows into lower addresses.
690  * Other FP units can be saved by simply moving registers into offsets
691  * from the base of the context area. Finally some FP units provide
692  * a "dump context" instruction which could fill in from high to low
693  * or low to high based on the whim of the CPU designers.
694  *
695  */
696 
697 #define _CPU_Context_Fp_start( _base, _offset ) \
698  ( (void *) _Addresses_Add_offset( (_base), (_offset) ) )
699 
700 /*
701  * This routine initializes the FP context area passed to it.
702  * There are a few standard ways in which to initialize the
703  * floating point context. The code included for this macro assumes
704  * that this is a CPU in which an "initial" FP context was saved into
705  * _CPU_Null_fp_context and it simply copies it to the destination
706  * context passed to it.
707  *
708  * Other models include (1) not doing anything, and (2) putting
709  * a "null FP status word" in the correct place in the FP context.
710  *
711  */
712 
713 #define _CPU_Context_Initialize_fp( _destination ) \
714  { \
715  *(*(_destination)) = _CPU_Null_fp_context; \
716  }
717 
718 /* end of Context handler macros */
719 
720 /* Fatal Error manager macros */
721 
722 /*
723  * This routine copies _error into a known place -- typically a stack
724  * location or a register, optionally disables interrupts, and
725  * halts/stops the CPU.
726  *
727  */
728 
729 #define _CPU_Fatal_halt(_source, _error ) \
730  do { printk("Fatal Error %d.%d Halted\n",_source, _error); \
731  asm("trap 3" :: "r" (_error)); \
732  for(;;); } while(0)
733 
734 /* end of Fatal Error manager macros */
735 
736 /* Bitfield handler macros */
737 
738 /*
739  * This routine sets _output to the bit number of the first bit
740  * set in _value. _value is of CPU dependent type Priority_Bit_map_control.
741  * This type may be either 16 or 32 bits wide although only the 16
742  * least significant bits will be used.
743  *
744  * There are a number of variations in using a "find first bit" type
745  * instruction.
746  *
747  * (1) What happens when run on a value of zero?
748  * (2) Bits may be numbered from MSB to LSB or vice-versa.
749  * (3) The numbering may be zero or one based.
750  * (4) The "find first bit" instruction may search from MSB or LSB.
751  *
752  * RTEMS guarantees that (1) will never happen so it is not a concern.
753  * (2),(3), (4) are handled by the macros _CPU_Priority_mask() and
754  * _CPU_Priority_bits_index(). These three form a set of routines
755  * which must logically operate together. Bits in the _value are
756  * set and cleared based on masks built by _CPU_Priority_mask().
757  * The basic major and minor values calculated by _Priority_Major()
758  * and _Priority_Minor() are "massaged" by _CPU_Priority_bits_index()
759  * to properly range between the values returned by the "find first bit"
760  * instruction. This makes it possible for _Priority_Get_highest() to
761  * calculate the major and directly index into the minor table.
762  * This mapping is necessary to ensure that 0 (a high priority major/minor)
763  * is the first bit found.
764  *
765  * This entire "find first bit" and mapping process depends heavily
766  * on the manner in which a priority is broken into a major and minor
767  * components with the major being the 4 MSB of a priority and minor
768  * the 4 LSB. Thus (0 << 4) + 0 corresponds to priority 0 -- the highest
769  * priority. And (15 << 4) + 14 corresponds to priority 254 -- the next
770  * to the lowest priority.
771  *
772  * If your CPU does not have a "find first bit" instruction, then
773  * there are ways to make do without it. Here are a handful of ways
774  * to implement this in software:
775  *
776  * - a series of 16 bit test instructions
777  * - a "binary search using if's"
778  * - _number = 0
779  * if _value > 0x00ff
780  * _value >>=8
781  * _number = 8;
782  *
783  * if _value > 0x000f
784  * _value >>= 4
785  * _number += 4
786  *
787  * _number += bit_set_table[ _value ]
788  *
789  * where bit_set_table[ 16 ] has values which indicate the first
790  * bit set
791  *
792  */
793 
794  /* #define CPU_USE_GENERIC_BITFIELD_CODE FALSE */
795 #define CPU_USE_GENERIC_BITFIELD_CODE TRUE
796 #define CPU_USE_GENERIC_BITFIELD_DATA TRUE
797 
798 #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE)
799 
800 #define _CPU_Bitfield_Find_first_bit( _value, _output ) \
801  { \
802  (_output) = 0; /* do something to prevent warnings */ \
803  }
804 #endif
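
The binary-search fallback sketched in the comment above can be written out as working code. This is illustrative only (the port uses the generic bitfield code, and all names here are made up); it returns the bit number, counted from the least significant end, of the most significant set bit in a non-zero 16-bit value, which is what the pseudocode computes:

```c
#include <stdint.h>

/* Table giving the highest set bit position for a 4-bit value.
 * Index 0 is unused: RTEMS guarantees the value is never zero. */
static const uint8_t demo_bit_set_table[16] = {
  0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3
};

static unsigned demo_find_first_bit(uint16_t value)
{
  unsigned number = 0;

  if (value > 0x00ff) {      /* answer lies in the high byte */
    value >>= 8;
    number = 8;
  }
  if (value > 0x000f) {      /* answer lies in the high nibble */
    value >>= 4;
    number += 4;
  }
  return number + demo_bit_set_table[value];
}
```

Whether this is faster than sixteen single-bit tests depends entirely on the CPU, which is the trade-off the comment describes.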
805 
806 /* end of Bitfield handler macros */
807 
808 /*
809  * This routine builds the mask which corresponds to the bit fields
810  * as searched by _CPU_Bitfield_Find_first_bit(). See the discussion
811  * for that routine.
812  *
813  */
814 
815 #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE)
816 
817 #define _CPU_Priority_Mask( _bit_number ) \
818  (1 << _bit_number)
819 
820 #endif
821 
822 /*
823  * This routine translates the bit numbers returned by
824  * _CPU_Bitfield_Find_first_bit() into something suitable for use as
825  * a major or minor component of a priority. See the discussion
826  * for that routine.
827  *
828  */
829 
830 #if (CPU_USE_GENERIC_BITFIELD_CODE == FALSE)
831 
832 #define _CPU_Priority_bits_index( _priority ) \
833  (_priority)
834 
835 #endif
836 
837 #define CPU_TIMESTAMP_USE_STRUCT_TIMESPEC FALSE
838 #define CPU_TIMESTAMP_USE_INT64 TRUE
839 #define CPU_TIMESTAMP_USE_INT64_INLINE FALSE
840 
841 #endif /* ASM */
842 
850 #define CPU_SIZEOF_POINTER 4
851 #define CPU_EXCEPTION_FRAME_SIZE 260
852 #define CPU_PER_CPU_CONTROL_SIZE 0
853 
854 #ifndef ASM
855 typedef uint16_t Priority_bit_map_Word;
856 
857 typedef struct {
858  uint32_t r[62];
859  uint32_t status;
860  uint32_t config;
861  uint32_t iret;
862 } CPU_Exception_frame;
863 
870 
871 
872 /* end of Priority handler macros */
873 
874 /* functions */
875 
876 /*
877  * _CPU_Initialize
878  *
879  * This routine performs CPU dependent initialization.
880  *
881  */
882 
883 void _CPU_Initialize(
884  void
885 );
886 
887 /*
888  * _CPU_ISR_install_raw_handler
889  *
890  * This routine installs a "raw" interrupt handler directly into the
891  * processor's vector table.
892  *
893  */
894 
895 void _CPU_ISR_install_raw_handler(
896  uint32_t vector,
897  proc_ptr new_handler,
898  proc_ptr *old_handler
899 );
900 
901 /*
902  * _CPU_ISR_install_vector
903  *
904  * This routine installs an interrupt vector.
905  *
906  * Epiphany Specific Information:
907  *
908  * XXX document implementation including references if appropriate
909  */
910 
911 void _CPU_ISR_install_vector(
912  uint32_t vector,
913  proc_ptr new_handler,
914  proc_ptr *old_handler
915 );
916 
917 /*
918  * _CPU_Install_interrupt_stack
919  *
920  * This routine installs the hardware interrupt stack pointer.
921  *
922  * NOTE: It need only be provided if CPU_HAS_HARDWARE_INTERRUPT_STACK
923  * is TRUE.
924  *
925  */
926 
927 void _CPU_Install_interrupt_stack( void );
928 
929 /*
930  * _CPU_Thread_Idle_body
931  *
932  * This routine is the CPU dependent IDLE thread body.
933  *
934  * NOTE: It need only be provided if CPU_PROVIDES_IDLE_THREAD_BODY
935  * is TRUE.
936  *
937  */
938 
939 void *_CPU_Thread_Idle_body( uintptr_t ignored );
940 
941 /*
942  * _CPU_Context_switch
943  *
944  * This routine switches from the run context to the heir context.
945  *
946  * epiphany Specific Information:
947  *
948  * Please see the comments in the .c file for a description of how
949  * this function works. There are several things to be aware of.
950  */
951 
952 void _CPU_Context_switch(
953  Context_Control *run,
954  Context_Control *heir
955 );
956 
957 /*
958  * _CPU_Context_restore
959  *
960  * This routine is generally used only to restart self in an
961  * efficient manner. It may simply be a label in _CPU_Context_switch.
962  *
963  * NOTE: May be unnecessary to reload some registers.
964  *
965  */
966 
967 void _CPU_Context_restore(
968  Context_Control *new_context
969 );
970 
971 /*
972  * _CPU_Context_save_fp
973  *
974  * This routine saves the floating point context passed to it.
975  *
976  */
977 
978 void _CPU_Context_save_fp(
979  void **fp_context_ptr
980 );
981 
982 /*
983  * _CPU_Context_restore_fp
984  *
985  * This routine restores the floating point context passed to it.
986  *
987  */
988 
989 void _CPU_Context_restore_fp(
990  void **fp_context_ptr
991 );
992 
993 /* The following routine swaps the endian format of an unsigned int.
994  * It must be static because it is referenced indirectly.
995  *
996  * This version will work on any processor, but if there is a better
997  * way for your CPU PLEASE use it. The most common way to do this is to:
998  *
999  * swap least significant two bytes with 16-bit rotate
1000  * swap upper and lower 16-bits
1001  * swap most significant two bytes with 16-bit rotate
1002  *
1003  * Some CPUs have special instructions which swap a 32-bit quantity in
1004  * a single instruction (e.g. i486). It is probably best to avoid
1005  * an "endian swapping control bit" in the CPU. One good reason is
1006  * that interrupts would probably have to be disabled to ensure that
1007  * an interrupt does not try to access the same "chunk" with the wrong
1008  * endian. Another good reason is that on some CPUs, the endian bit
1009  * endianness for ALL fetches -- both code and data -- so the code
1010  * will be fetched incorrectly.
1011  *
1012  */
1013 
1014 static inline unsigned int CPU_swap_u32(
1015  unsigned int value
1016 )
1017 {
1018  uint32_t byte1, byte2, byte3, byte4, swapped;
1019 
1020  byte4 = (value >> 24) & 0xff;
1021  byte3 = (value >> 16) & 0xff;
1022  byte2 = (value >> 8) & 0xff;
1023  byte1 = value & 0xff;
1024 
1025  swapped = (byte1 << 24) | (byte2 << 16) | (byte3 << 8) | byte4;
1026  return( swapped );
1027 }
1028 
1029 #define CPU_swap_u16( value ) \
1030  (((value&0xff) << 8) | ((value >> 8)&0xff))
1031 
1032 static inline void _CPU_Context_volatile_clobber( uintptr_t pattern )
1033 {
1034  /* TODO */
1035 }
1036 
1037 static inline void _CPU_Context_validate( uintptr_t pattern )
1038 {
1039  while (1) {
1040  /* TODO */
1041  }
1042 }
1043 
1044 typedef uint32_t CPU_Counter_ticks;
1045 
1046 CPU_Counter_ticks _CPU_Counter_read( void );
1047 
1048 static inline CPU_Counter_ticks _CPU_Counter_difference(
1049  CPU_Counter_ticks second,
1050  CPU_Counter_ticks first
1051 )
1052 {
1053  return second - first;
1054 }
1055 
1056 #ifdef RTEMS_SMP
1057 
1070  uint32_t _CPU_SMP_Initialize( void );
1071 
1085  bool _CPU_SMP_Start_processor( uint32_t cpu_index );
1086 
1101  void _CPU_SMP_Finalize_initialization( uint32_t cpu_count );
1102 
1110  uint32_t _CPU_SMP_Get_current_processor( void );
1111 
1120  void _CPU_SMP_Send_interrupt( uint32_t target_processor_index );
1121 
1133  void _CPU_SMP_Processor_event_broadcast( void );
1134 
1143  static inline void _CPU_SMP_Processor_event_receive( void )
1144  {
1145  __asm__ volatile ( "" : : : "memory" );
1146  }
1147 
1153  static inline bool _CPU_Context_Get_is_executing(
1154  const Context_Control *context
1155  )
1156  {
1157  return context->is_executing;
1158  }
1159 
1166  static inline void _CPU_Context_Set_is_executing(
1167  Context_Control *context,
1168  bool is_executing
1169  )
1170  {
1171  context->is_executing = is_executing;
1172  }
1173 #endif /* RTEMS_SMP */
1174 
1175 #endif /* ASM */
1176 
1177 #ifdef __cplusplus
1178 }
1179 #endif
1180 
1181 #endif