[Beginner's Guide] Embedded Operating Systems (1)

Posted on 2005-06-09 17:43

The original was posted by Carol on LoveUnix at http://www.loveunix.net/bbs/index.php?showtopic=28788&st=0. These three sections are a translation of the last three parts of the original; the original text and my translation are posted here.
Source:
RTOS Basics: The Task Model (Multitasking)
This section describes how tasks (processes) are modeled inside an RTOS.
In a real-time system a problem is subdivided into several parts, and each part is executed according to its real-time requirements. Each of these parts can be considered a 'task', as shown in Figure 1-3.
Figure 1-3: A problem can be broken down into n tasks that are processed individually
Because of the real-time requirements, these tasks have to be processed simultaneously; sequential processing would not meet their timing requirements. Simultaneous task execution without an RTOS would require several CPUs (one CPU per task). With an RTOS, only one CPU is required for simultaneous task execution, provided the processing capacity of that CPU is sufficient for the real-time requirements of the application. In this case the tasks compete against each other for the processor and for other hardware resources (e.g. memory or an I/O port).
To organize the task competition for the processor, a task-state model defines the state of each task at any point during system runtime. It also defines when, why, and how task states change (task state transitions). Each RTOS defines such a task-state model.
As an example, the OSEK/VDX OS task-state model is shown in Figure 1-4. It has three main states for a task and one optional state.
Figure 1-4: Task states and transitions
The task states and transitions shown in Figure 1-4 are:
Suspended – The default state of a task at system start. In this state the task is inactive. Through activation (an RTOS service), the task transitions into the Ready state.
Ready – The task is active and waits to enter the Running state. The scheduler, a part of the RTOS, decides when the transition to the Running state takes place; depending on its algorithm and the states of other tasks, it can also set a task back to the Ready state.
Running – In the Running state the task holds the processor and executes its instructions; only one task can be in this state at any point in time. From this state, an RTOS service can move the task back to the Suspended state, stopping execution and deactivating the task. Another RTOS service can move the task into the Waiting state, where it also stops executing but remains active.
Waiting – In this state a task waits for an event (set via another RTOS service). When the event is set, the task changes to the Ready state. This state is optional for OSEK/VDX OS tasks.
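As a rough illustration of this model (a hypothetical sketch in C, not actual OSEK/VDX code; all type, state, and function names below are made up), the four states and the transitions between them could be modeled like this:

#include <stdio.h>

/* Hypothetical task states mirroring the OSEK/VDX model described above. */
typedef enum {
    TASK_SUSPENDED,  /* inactive; default state at system start          */
    TASK_READY,      /* active; waiting for the scheduler                */
    TASK_RUNNING,    /* owns the processor; at most one task at a time   */
    TASK_WAITING     /* active but blocked on an event (optional state)  */
} task_state_t;

typedef struct {
    const char  *name;
    task_state_t state;
} task_t;

/* Activation (RTOS service): Suspended -> Ready.             */
static void activate(task_t *t)   { if (t->state == TASK_SUSPENDED) t->state = TASK_READY; }
/* Scheduler dispatch: Ready -> Running.                      */
static void start(task_t *t)      { if (t->state == TASK_READY)     t->state = TASK_RUNNING; }
/* Wait on an event (RTOS service): Running -> Waiting.       */
static void wait_event(task_t *t) { if (t->state == TASK_RUNNING)   t->state = TASK_WAITING; }
/* Event is set: Waiting -> Ready.                            */
static void set_event(task_t *t)  { if (t->state == TASK_WAITING)   t->state = TASK_READY; }
/* Termination (RTOS service): Running -> Suspended.          */
static void terminate(task_t *t)  { if (t->state == TASK_RUNNING)   t->state = TASK_SUSPENDED; }

int main(void) {
    task_t t = { "demo", TASK_SUSPENDED };
    activate(&t); start(&t); wait_event(&t); set_event(&t); start(&t); terminate(&t);
    printf("%s ended in state %d\n", t.name, t.state);  /* 0 == TASK_SUSPENDED */
    return 0;
}

In a real RTOS these transitions are of course triggered by kernel services and by the scheduler itself, not by manipulating the task structure directly.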
RTOS Basics: Scheduling Tasks
In order to efficiently coordinate task competition for the processor, a scheduler is used to decide which task the processor executes and when.
There are two basic scheduling strategies: preemptive and non-preemptive (cooperative) scheduling.
In a preemptive system, a task can be preempted (i.e. interrupted) during its execution, meaning the scheduler can switch to another task at any time, depending on the system state. Preemption is useful for tasks with long execution times: more important tasks can preempt less important ones in order to use the available processor capacity efficiently.
In a non-preemptive system, each task occupies the CPU for as long as it executes. A task switch does not occur unless the task in control voluntarily relinquishes control of the processor (i.e. terminates).
Two basic types of scheduling exist: static scheduling (time-controlled) and dynamic scheduling (event-controlled). With static scheduling, the sequence in which the tasks execute is predefined. With dynamic scheduling, the decision to execute a task is made at run time based on the state of the system; the scheduler adapts to the current task situation. Dynamic scheduling uses the processor capacity more efficiently because activities are started only when necessary (when an outside event occurs) rather than according to a static schedule.
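To make the distinction concrete, here is a minimal sketch of the static (time-controlled) case, assuming a hypothetical dispatcher that simply cycles through a fixed table of task functions; a dynamic scheduler would instead decide at run time, based on events and task states, which task to run next:

#include <stdio.h>

/* A static (time-triggered) schedule: the execution order is fixed at
   design time as a table that the dispatcher cycles through. In a real
   system each slot would be tied to a timer tick.                       */
typedef void (*task_fn)(void);

static void task_sensor(void)  { printf("read sensor\n"); }
static void task_control(void) { printf("run control loop\n"); }
static void task_output(void)  { printf("update actuator\n"); }

static task_fn schedule_table[] = { task_sensor, task_control, task_sensor, task_output };

int main(void) {
    const int slots = sizeof schedule_table / sizeof schedule_table[0];
    for (int cycle = 0; cycle < 2; cycle++)      /* run two full cycles      */
        for (int i = 0; i < slots; i++)
            schedule_table[i]();                 /* dispatch in fixed order  */
    return 0;
}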
Possible Scheduling Methods
There are many schemes for scheduling tasks. In this section we will look at three common ones: priority control, time slice, and first-in first-out (FIFO). Multiple schemes can be combined in an operating system.

In a priority-controlled scheduler, each task is assigned a priority by the operating system, depending on its importance. Priorities allow the developer to control how quickly a task gets runtime and how often it runs, so higher-priority tasks complete sooner. In one possible implementation, the tasks are placed into different priority queues (all tasks in one queue share the same priority). Tasks are scheduled from the head of a given queue only if all queues of higher priority are empty. Within each priority queue, tasks must be ordered as well; since their priority is the same, some other mechanism, such as FIFO, determines the order. Typically, priority control is combined with dynamic scheduling, so the order of execution is not static, and with preemptive scheduling, so that higher-priority tasks can preempt lower-priority ones.
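The following is a minimal sketch of such a priority-controlled ready queue in C, assuming a hypothetical system with a fixed number of priority levels (0 being the highest) and FIFO ordering within each level; it is not taken from any particular RTOS:

#include <stdio.h>

#define NUM_PRIOS  4   /* priority 0 is the highest in this sketch */
#define QUEUE_LEN  8

/* One FIFO queue of task IDs per priority level. */
typedef struct {
    int tasks[QUEUE_LEN];
    int head, tail, count;
} fifo_t;

static fifo_t ready[NUM_PRIOS];

/* Activation: append the task to the tail of its priority queue. */
static void enqueue(int prio, int task_id) {
    fifo_t *q = &ready[prio];
    if (q->count < QUEUE_LEN) {
        q->tasks[q->tail] = task_id;
        q->tail = (q->tail + 1) % QUEUE_LEN;
        q->count++;
    }
}

/* Scheduling decision: take the head of the highest-priority non-empty
   queue; -1 means there is nothing to run (idle).                       */
static int schedule(void) {
    for (int p = 0; p < NUM_PRIOS; p++) {
        fifo_t *q = &ready[p];
        if (q->count > 0) {
            int id = q->tasks[q->head];
            q->head = (q->head + 1) % QUEUE_LEN;
            q->count--;
            return id;
        }
    }
    return -1;
}

int main(void) {
    enqueue(2, 10);     /* low-priority task activated first  */
    enqueue(0, 20);     /* high-priority task activated later */
    enqueue(2, 11);
    int id;
    while ((id = schedule()) != -1)
        printf("run task %d\n", id);    /* prints 20, then 10, then 11 */
    return 0;
}

Task 20 is dispatched first even though it was activated last, because it has the highest priority; the two priority-2 tasks then run in their activation (FIFO) order.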

Time slice (also known as 'round robin') is the next scheduling method. A small unit of time, called a time slice or quantum, is defined during which each task may run. In the simplest case all time slices have the same duration, but they can also differ. All runnable tasks are kept in a circular queue, and newly activated tasks are added to the tail of the queue. The scheduler goes around this queue, allocating the CPU to each task for an interval of one time slice. The end of a time slice can be a hard deadline, in which case the task is terminated, or the task is stopped (preempted), re-queued at the tail, and may finish during its next time slice. If a task finishes before the end of its time slice, it releases the CPU voluntarily. In either case, the scheduler then assigns the CPU to the next task in the queue. Time slicing is often used when multiple tasks of equal priority compete for the CPU. It is one of the simplest and most widely used scheduling algorithms, but it does not use the processor capacity in the most efficient way, because the CPU can sit idle between the moment a task terminates before the end of its time slice and the moment the next task gets the CPU.
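A simplified round-robin simulation, assuming each task is reduced to the number of work ticks it still needs and every task gets the same quantum, might look like this (a sketch, not a real scheduler; the task names and numbers are made up):

#include <stdio.h>

#define QUANTUM 4   /* time slice length in arbitrary ticks */

/* Each task is reduced to the amount of work (in ticks) it still needs. */
typedef struct { const char *name; int remaining; } task_t;

int main(void) {
    task_t tasks[] = { {"A", 10}, {"B", 3}, {"C", 7} };
    const int n = sizeof tasks / sizeof tasks[0];
    int active = n;

    /* Walk the task set in a circle, giving each unfinished task one quantum. */
    while (active > 0) {
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining == 0)
                continue;                              /* already finished */
            int run = tasks[i].remaining < QUANTUM ? tasks[i].remaining : QUANTUM;
            tasks[i].remaining -= run;                 /* a task that finishes early
                                                          releases the CPU before the
                                                          end of its slice            */
            printf("%s ran %d ticks, %d left\n", tasks[i].name, run, tasks[i].remaining);
            if (tasks[i].remaining == 0)
                active--;
        }
    }
    return 0;
}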

FIFO (First In First Out, also called First Come First Served) queuing is the most basic queue scheduling discipline. In FIFO queuing, all tasks are treated equally by placing them into a single queue, and they are serviced in the same order in which they were placed into the queue. This is a very simple scheduling mechanism that is appropriate for less complex or order-dependent systems.

An OSEK/VDX OS example using priority-based, dynamic scheduling: the software developer determines the task execution sequence via task priorities and the selected scheduling mechanism. Only activated tasks enter the Ready state and are placed into the priority queues. Figure 1-5 illustrates a series of tasks being executed by a processor, where the order of task execution is defined by the scheduler depending on priority and activation order.
Figure 1-5: Scheduler and task priorities, using FIFO (First-In-First-Out) order to process tasks
In OSEK/VDX OS, tasks can be either preemptive or non-preemptive, depending on the task configuration.
RTOS Basics: Synchronizing Resource Access
An RTOS typically defines a resource management mechanism to synchronize tasks when they access shared resources.
The definition of a resource in this context includes both hardware (e.g. memory, an I/O port) and software (global functions and global variables). Resources are accessible to all tasks in a real-time system, but care must obviously be taken to ensure that different tasks do not attempt to use the same resource in conflicting ways. In preemptive systems, a task can be preempted in the middle of a resource access. When that happens, another task might access the same resource and create an inconsistent state, because the preempted task's access may still be incomplete. Resource managers prevent such simultaneous access in preemptive systems by synchronizing the tasks. Like task scheduling, resource control can follow several protocols; in this section we discuss three common ones: the semaphore protocol, the priority ceiling protocol, and the highest locker priority protocol.
Semaphore Protocol
With the semaphore method, a task that is about to use a resource sets a flag (semaphore) indicating that the resource is in use. A task can only obtain a resource if no other task currently holds it (assignment follows the FIFO principle). While this is a valid, simple, and straightforward control mechanism, it has well-known problems: priority inversion and deadlocks.
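A minimal sketch of the flag idea, assuming a single processor and ignoring the atomicity and waiting-queue details a real semaphore implementation needs, might look like this:

#include <stdio.h>
#include <stdbool.h>

/* A binary semaphore reduced to a flag: true means the resource is free.
   In a real kernel, take/give must be atomic (interrupts disabled or a
   hardware test-and-set); otherwise two tasks could both see "free" and
   both take the resource.                                               */
typedef struct { volatile bool free; } binary_sem_t;

static bool sem_take(binary_sem_t *s) {
    if (s->free) {          /* must be atomic in a real implementation */
        s->free = false;
        return true;        /* caller now owns the resource            */
    }
    return false;           /* another task holds it; caller must wait */
}

static void sem_give(binary_sem_t *s) { s->free = true; }

int main(void) {
    binary_sem_t io_port = { true };

    if (sem_take(&io_port)) {
        printf("task A uses the I/O port\n");
        /* If a higher-priority task B preempted A here and also needed the
           port, B would have to wait for A; if a medium-priority task then
           kept A off the CPU, B would be blocked indirectly by a lower-
           priority task: this is priority inversion.                      */
        sem_give(&io_port);
    }

    if (sem_take(&io_port))
        printf("task B now gets the port\n");
    return 0;
}

The priority ceiling and highest locker priority protocols mentioned above address exactly this problem by temporarily raising the priority of the task that holds the resource.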
This article is from the ChinaUnix blog. To view the original, go to: http://blog.chinaunix.net/u/4274/showart_30107.html