
From the control theory point of view, an effective controller design requires a model of the control system dynamics, i.e., the relationship between the controlled variable and the manipulated variable. The following equation is employed to model our controlled system in the $k$-th invocation interval:

$$y(k) = \frac{\alpha \sum_{i=1}^{n} e_i}{u(k)} + R \tag{8}$$

where $y(k)$ is the output of the TMU; $u(k)$ is the manipulated variable; $\alpha$ is the variable, unknown workload factor; $n$ is the number of tasks; $e_i$ is the estimated execution time of task $\tau_i$; and $R$ is the time reserved for recovery execution.
To guarantee stability under all conditions, the workload factor $\alpha$ in Eq. (8) is replaced by its maximum possible value, $\alpha_{\max}$ [32]. Incorporating $\alpha_{\max}$, Eq. (8) can be rewritten as follows:

$$y(k) = \frac{\alpha_{\max} \sum_{i=1}^{n} e_i}{u(k)} + R \tag{9}$$
Obviously, the system output, $y(k)$, is inversely proportional to $u(k)$. By substituting $v(k) = 1/u(k)$ for $u(k)$, we obtain a linear model as follows:

$$y(k) = \alpha_{\max} \sum_{i=1}^{n} e_i \, v(k) + R \tag{10}$$
Since the system workload, $\sum_{i=1}^{n} e_i$, may vary during runtime, our system is a time-variant system. We can deal with this variability in the same way as with $\alpha$; therefore, we use the maximum possible value of $\sum_{i=1}^{n} e_i$. However, unlike $\alpha$, which is unpredictable, $\sum_{i=1}^{n} e_i$ is known to the system. Following the method used in [5], the constant term $R$ is removed from Eq. (10). The following discrete-time model can be found by taking the $z$-transform of Eq. (10):

$$G(z) = \frac{Y(z)}{V(z)} = K \tag{11}$$

where $K = \alpha_{\max} \sum_{i=1}^{n} e_i$.
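As a minimal numerical sketch of this derivation (the symbols, the gain, the workload values, and the integral control law below are illustrative assumptions, not the paper's actual controller design), the linear model can be simulated in a feedback loop that drives the completion time toward a set-point even when the true workload factor changes at runtime:

```python
# Sketch of the linear model y(k) = K * v(k), where v(k) = 1/u(k),
# closed with a simple integral controller. All numbers are illustrative.

alpha_max = 1.2          # assumed maximum workload factor
e = [2.0, 3.0, 1.5]      # assumed estimated task execution times (ms)
K = alpha_max * sum(e)   # static gain of the linear model

def plant(v, alpha):
    """True (time-variant) system: output depends on the actual alpha."""
    return alpha * sum(e) * v

setpoint = 6.0           # desired completion time y_ref (ms)
v = setpoint / K         # initial manipulated variable v(0) = 1/u(0)
ki = 0.5 / K             # integral gain (assumed tuning)

for k in range(20):
    alpha = 0.8 if k < 10 else 1.1   # actual workload changes at k = 10
    y = plant(v, alpha)
    v += ki * (setpoint - y)         # integral action on tracking error

print(round(y, 2))   # output settles near the 6.0 ms set-point
```

Despite the step change in the actual workload factor, the integral action re-converges the output to the set-point, which is the adaptation behavior the feedback-based DVS scheme relies on.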

Simulation and experimental results
In this section, we evaluate the performance of our Fault-Tolerant Feedback-based DVS (FTF-DVS) method under different workloads and compare it with several representative DVS schemes. Simulations have been performed in MATLAB, together with the TrueTime simulator [35]. TrueTime is a complete co-simulation tool based on MATLAB/Simulink that supports dynamic voltage scaling and scheduling policies such as EDF.

Conclusion and future work
High fault tolerance against transient faults and low power consumption are key objectives in the design of real-time embedded systems. In this work, targeting hard real-time systems, we proposed a control-theoretic energy management method that reduces dynamic energy consumption without compromising overall system reliability, while also managing static energy consumption. Total energy reduction is achieved via dynamic voltage scaling (DVS) combined with feedback scheduling, and fault tolerance is provided by recovery executions. The proposed feedback-based DVS method makes the system capable of adapting to unpredictable workload variations, while significantly reducing energy consumption without sacrificing the reliability level. In the proposed method, the available dynamic slack time is exploited by feedback-based DVS at runtime to reduce energy consumption, while some slack time is reserved statically for recovery execution in case of faults.
Simulation results illustrate that our proposed method not only saves up to 59% energy (51% on average, assuming that WCU is uniformly distributed) compared to the WCET-based DVS (wDVS) scheme, but also satisfies hard real-time constraints in the presence of workload fluctuations and faults. In another comparison, although the energy saving of our proposed scheme is not as large as that of control-theoretic DVS (ctDVS), it provides hard real-time guarantees and manages static energy, unlike ctDVS. One candidate for future work is slack reclamation: when a task finishes successfully, the static slack time reserved for fault tolerance during the execution of that task is no longer required and, hence, can be reclaimed for further energy saving. A specific slack reclamation scheme can therefore be developed for our proposed technique that can potentially improve its energy saving.

The curse of dimensionality, introduced by Bellman, is one of the most important problems in data classification with large input dimensions. Feature selection and feature extraction are common solutions  [1–3].
In this article, an investigative approach is defined to exploit the advantages of feature selection methods. The aim is to map the feature vectors to a new space with lower dimensionality and then to classify the test data with Nearest Neighbor (NN) and SVM classifiers. In practice, it is observed that in high-dimensional problems some attributes have noisy values, which can inadvertently reduce the classification accuracy by affecting the gradient of the mapping hyperplane (a line in 2-D). Hence, a combination of three parallel feature reduction components is proposed here to soften these effects, as illustrated in Figure 1. In the first step, a different initial condition is assigned to each tabu search. The outcome of each search is then given to a voting function, which selects the best subset by considering the cooperation among them.
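As a minimal sketch of the voting stage (the three candidate subsets and the majority-quorum rule below are illustrative assumptions; the paper's actual tabu-search procedure and cooperation criterion are not reproduced here), the combination step can be viewed as keeping features that a majority of the parallel runs agree on:

```python
from collections import Counter

# Illustrative outputs of three tabu-search runs, each started from a
# different initial condition; each run returns a candidate feature subset.
subsets = [
    {0, 2, 5, 7},
    {0, 2, 6, 7},
    {0, 3, 5, 7},
]

def vote(subsets, quorum=2):
    """Keep every feature selected by at least `quorum` of the runs."""
    counts = Counter(f for s in subsets for f in s)
    return {f for f, c in counts.items() if c >= quorum}

selected = vote(subsets)
print(sorted(selected))   # -> [0, 2, 5, 7]
```

Features chosen by only one run (here 3 and 6) are treated as likely noise-driven and discarded, which is one simple way the cooperation between the parallel searches can soften the effect of noisy attributes.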