Projects

As a Principal Investigator (PI):

Adaptive Optical Systems

Development of a Fixed-Order H∞ Sub-Optimal Controller for Adaptive Optical Systems:

Adaptive optics (AO) enhances the capability of optical systems by actively compensating for aberrations. These aberrations, such as atmospheric turbulence, optical fabrication errors, thermally induced distortions, or laser device aberrations, reduce the peak intensity and smear an image or a laser beam propagating to a target. The twinkling of stars, or the shimmering of images above a paved road on a hot summer day, is caused by turbulence in the atmosphere. Distortions like these have long been corrected by adaptive optics. The principal uses of adaptive optics are improving image quality in optical and infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. An adaptive optics system consists of a wavefront sensor, a deformable mirror (DM), and a control unit, and its basic operating principle is detection, computation, and actuation. First, the distortion in the wavefront is measured by the sensor. Then, the control unit drives the DM so that it takes a shape conjugate to the distorted wavefront, and the wave reflected from the front face of the mirror is thereby corrected.
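A minimal numerical sketch of this detection-computation-actuation loop, assuming a linear sensor model and a simple integrator control law (the matrix sizes, influence matrix, and gain below are illustrative, not taken from the project):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 32 sensor measurements, 16 DM actuators.
G = rng.standard_normal((32, 16))   # assumed DM-to-sensor influence matrix
R = np.linalg.pinv(G)               # least-squares wavefront reconstructor
gain = 0.4                          # integrator gain (illustrative)

phase = G @ rng.standard_normal(16) # a correctable input aberration
dm_cmd = np.zeros(16)               # DM actuator commands

# Detection -> computation -> actuation, repeated in closed loop:
for k in range(20):
    residual = phase - G @ dm_cmd   # wavefront after the DM correction
    slopes = residual               # sensor measures the residual wavefront
    dm_cmd += gain * (R @ slopes)   # drive the DM toward the conjugate shape

print("residual error after 20 cycles:", np.linalg.norm(phase - G @ dm_cmd))
```

The residual wavefront error contracts geometrically at each cycle, which is the basic behavior the control unit must guarantee.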

Many applications needed in the defense industry, particularly long-distance secure wireless communication, laser weapons, and imaging systems, rely on adaptive optics systems that currently cannot be produced in our country. The main reason is that such systems cannot be easily controlled with conventional methods, or the desired success rate cannot be reached with them. For example, in the absence of adaptive optics in a laser damage or blinding system, manufacturers are forced to increase their laser power, since sufficient energy density to destroy the target cannot be achieved without eliminating atmospheric distortions. In order to obtain an applicable control system, the problem needs to be solved as an optimization problem with a proper definition of the performance index.
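In the standard sub-optimal H∞ setting (a generic statement of such a performance index, not the project's specific one), the objective is the closed-loop H∞ norm from the disturbance w to the performance output z:

```latex
\min_{K \ \text{stabilizing}} \ \| T_{zw}(K) \|_{\infty}
\qquad \text{or, in sub-optimal form,} \qquad
\| T_{zw}(K) \|_{\infty} < \gamma ,
```

where T_zw is the closed-loop transfer matrix and γ > 0 is the prescribed performance level.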

Fixed-Order Controller

Reducing the Conservatism in the Design of Fixed-Order H∞ Controllers:


In order to achieve the desired performance for a dynamic system, such as disturbance rejection, maximum overshoot, or settling time, we can use H∞ control with a constraint on the location of the closed-loop poles. In standard H∞ control theory, the designed controller has the same order as the plant and is called a full-order controller in the literature. As a result of this design constraint, the degree of the closed-loop system is doubled. In industry, engineers avoid full-order controllers because of their high sensitivity to changes in the system's parameters. Furthermore, implementing full-order controllers obtained by conventional methods on embedded systems is not easy. Although conventional methods seem feasible in theory for low-order systems, they may not be in practice, because the weighting filters placed at the performance outputs further increase the degree of the system. For these reasons, the computer-aided design of fixed-order controllers is one of the most challenging topics for control engineers.
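The order growth is easy to see with the python-control package (a sketch using an arbitrary second-order plant and first-order weights, not the project's system; requires the slycot backend): standard mixed-sensitivity H∞ synthesis returns a controller of the same order as the weighted generalized plant.

```python
import control as ct

# Arbitrary second-order plant (illustrative only).
g = ct.tf2ss(ct.tf([1], [1, 2, 1]))

# First-order weighting filters on sensitivity (W1) and
# complementary sensitivity (W3).
w1 = ct.tf2ss(ct.tf([0.5, 1], [1, 0.01]))
w3 = ct.tf2ss(ct.tf([1, 0.5], [0.1, 10]))

# Standard mixed-sensitivity H-infinity synthesis (full-order design).
k, cl, info = ct.mixsyn(g, w1, None, w3)

print("plant order:     ", g.nstates)   # 2
print("controller order:", k.nstates)   # 4 = plant + both weights
```

The controller order equals the plant order plus the orders of the weighting filters, which is exactly the growth the paragraph above describes.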


The main difficulty in the development of fixed-order controllers based on convex approaches is that the associated stability region in the space of coefficients of the closed-loop characteristic polynomial is, in general, non-convex. In order to cope with this problem, inner and outer convex approximation techniques are used.
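A standard inner approximation of this kind (stated generically here; the notation is ours, not the project's): fix a stable central polynomial c(s) and replace the non-convex set of stable polynomials by the convex set of polynomials that are positive real with respect to c,

```latex
\mathcal{C}_{c} \;=\; \Bigl\{\, p \;:\; \operatorname{Re}\,\frac{p(j\omega)}{c(j\omega)} \;>\; 0 \quad \forall\, \omega \in \mathbb{R} \,\Bigr\}.
```

Every polynomial in C_c is stable, the constraint is linear in the coefficients of p, and the quality of the approximation, hence the conservatism, depends entirely on the choice of c(s).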


The main contribution of our work is deriving an explicit equation for the edges of this convex body in terms of the coefficients of the central polynomial, which makes it possible to find an adequate central polynomial for the problem and thus to reduce the conservatism of the fixed-order controller.

In addition to reducing the conservatism of the design technique for fixed-order controllers, this project also presents a strategy for tuning the gains of a PID controller so as to achieve the least conservative robust sub-optimal H∞ controller.
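The PID case fits this framework naturally because the controller, with a filtered derivative term assumed here, is linear in its tunable gains, so the convex constraints above remain convex in (k_p, k_i, k_d):

```latex
K(s) \;=\; k_{p} \;+\; \frac{k_{i}}{s} \;+\; \frac{k_{d}\, s}{1 + \tau s}.
```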




As a Researcher:

Dwell Time Optimisation

Determination of Stability and Stabilisation Conditions for Linear and Nonlinear Discrete-Time Bi-Modal Systems, Controller Design and Implementation:

In many applications there is a dwell time between switchings, for example the changing road conditions (dry, wet, dirt) a car encounters, or the different operating conditions across the flight envelope. As a result, the stability analysis and stabilisation of switched systems with dwell time have become increasingly popular research topics.
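A minimal numerical sketch of a dwell-time certificate for a discrete-time bi-modal switched system, following the well-known LMI conditions of Geromel and Colaneri (the mode matrices and the scan below are illustrative, not the project's):

```python
import numpy as np
import cvxpy as cp

# Two illustrative Schur-stable modes of a discrete-time bi-modal system.
A1 = np.array([[0.6, 0.4], [0.0, 0.7]])
A2 = np.array([[0.7, 0.0], [-0.5, 0.6]])

def sym(X):
    return (X + X.T) / 2   # keep expressions structurally symmetric for the solver

def certified(tau: int) -> bool:
    """Geromel-Colaneri LMIs: feasibility certifies stability for dwell >= tau."""
    n = A1.shape[0]
    P1 = cp.Variable((n, n), PSD=True)
    P2 = cp.Variable((n, n), PSD=True)
    A1t = np.linalg.matrix_power(A1, tau)
    A2t = np.linalg.matrix_power(A2, tau)
    E = 1e-6 * np.eye(n)
    cons = [
        P1 >> np.eye(n), P2 >> np.eye(n),
        sym(A1.T @ P1 @ A1 - P1) << -E,    # each mode contracts in its own metric
        sym(A2.T @ P2 @ A2 - P2) << -E,
        sym(A1t.T @ P2 @ A1t - P1) << -E,  # after tau steps in mode 1, switch to 2
        sym(A2t.T @ P1 @ A2t - P2) << -E,  # after tau steps in mode 2, switch to 1
    ]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status == cp.OPTIMAL

for tau in range(1, 10):   # scan for the smallest certified dwell time
    if certified(tau):
        print("certified minimum dwell time:", tau)
        break
```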



Active Suspension System

The system constructed in the laboratory consists of two main blocks cascaded one above the other. The computer-controlled lower block is used as a vibrator that simulates the desired road profile for the upper system, while the upper block serves as a prototype of an active/passive quarter-car suspension system. Both systems can be controlled in real time through a Quanser Q8 data acquisition and control card connected to MATLAB/SIMULINK. Various active and passive controllers, such as the optimal H∞ controller, the robust optimal H∞ controller, the optimal LPV L2 controller, and the Homogeneous-Polynomial-Parameter-Dependent (HPPD) L2 controller, which addresses the nonlinearity originating from actuator saturation, are examined on the designed system. Experimental results clearly demonstrate that the designed controller techniques for LPV systems are very successful.
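For reference, a minimal state-space sketch of the standard linear quarter-car model that such a rig emulates (the parameter values are illustrative, not the laboratory rig's):

```python
import numpy as np
import control as ct

# Illustrative quarter-car parameters (not the laboratory rig's values).
ms, mu = 300.0, 40.0        # sprung / unsprung mass [kg]
ks, kt = 16000.0, 190000.0  # suspension / tire stiffness [N/m]
cs = 1000.0                 # suspension damping [N s/m]

# States: [suspension deflection, sprung-mass velocity,
#          tire deflection, unsprung-mass velocity]
A = np.array([
    [0,        1,      0,      -1],
    [-ks/ms,  -cs/ms,  0,       cs/ms],
    [0,        0,      0,       1],
    [ks/mu,    cs/mu, -kt/mu,  -cs/mu],
])
# Inputs: [road velocity (disturbance), actuator force]
B = np.array([
    [0,     0],
    [0,     1/ms],
    [-1,    0],
    [0,    -1/mu],
])
C = np.eye(4)          # full state available in simulation
D = np.zeros((4, 2))

qcar = ct.ss(A, B, C, D)
print("open-loop poles:", qcar.poles())
```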

As a Technical Consultant:

Development of the optimal motion planner for the SIMATIC Robot Integrator App


Irrigation Optimisation: 

Three versions of the solution were presented during the development of this product: a conventional method, a hybrid method, and a deep learning-based probabilistic method.

 

In the lower layer of the conventional approach sit a clustering algorithm over the features and a detection algorithm for the active root region. The data obtained from the soil sensors and the weather stations of the fields are subjected to descriptive analysis. Afterwards, features are created for each soil layer with expert opinion. These features are the inputs of our hierarchical clustering algorithm, which identifies the active layer of the soil. In the main layer of the conventional approach, the characteristic signature of the active region in each field is obtained by statistical machine learning using past Daily Depletion of Soil Moisture (DDSM) values. Using this signature and the error variance in the historical data, the algorithm schedules irrigation by detecting anomalies within the active irrigation window.
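A minimal sketch of the lower-layer step, assuming per-layer features have already been extracted (the feature values, the two-cluster choice, and the "highest depletion rate" rule are hypothetical; SciPy's standard hierarchical clustering stands in for the project's algorithm):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical features per soil layer (rows: depths 10..60 cm):
# [mean moisture, daily depletion rate, correlation with irrigation events]
layers_cm = np.array([10, 20, 30, 40, 50, 60])
features = np.array([
    [0.31, 0.020, 0.85],
    [0.29, 0.018, 0.80],
    [0.27, 0.015, 0.72],
    [0.33, 0.004, 0.20],
    [0.35, 0.003, 0.15],
    [0.36, 0.002, 0.10],
])

# Agglomerative clustering separates "active" from "inactive" layers.
Z = linkage(features, method="ward")
cluster_id = fcluster(Z, t=2, criterion="maxclust")

# Treat the cluster with the highest depletion rate as the active root region.
active = cluster_id == cluster_id[np.argmax(features[:, 1])]
print("active root region (cm):", layers_cm[active])
```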

 

In the hybrid approach, weather-based and soil-based irrigation scheduling algorithms are combined. Here, an LSTM again acts as our time-series forecasting algorithm, this time to estimate the Kr(t) function. In addition, the elasticity of each component in the water-balance formula is periodically re-estimated with a two-stage regression algorithm.
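A minimal PyTorch sketch of such a forecaster, assuming a sliding window of daily weather and soil features as input (the window length, feature count, and the use of Kr(t) as the regression target are placeholders for the project's actual setup):

```python
import torch
import torch.nn as nn

class KrForecaster(nn.Module):
    """LSTM regressor: a window of daily features -> next-day Kr estimate."""
    def __init__(self, n_features: int = 6, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)            # x: (batch, window, n_features)
        return self.head(out[:, -1, :])  # regress from the last hidden state

model = KrForecaster()
window = torch.randn(8, 14, 6)           # batch of 8, 14-day windows, 6 features
kr_next = model(window)                  # (8, 1) next-day Kr estimates
loss = nn.functional.mse_loss(kr_next, torch.rand(8, 1))
loss.backward()                          # standard supervised training step
```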

 


The main purpose of our deep learning-based approach is to train a deep Q(LSTM)-network model. This structure is based on the Markov Decision Process (MDP). In our problem, there is a simple decision the farmer must take in irrigation scheduling: how many hours will he or she irrigate the field tomorrow? This duration can vary from 0 to 24 hours. In making this choice, the farmer incorporates probability into the decision-making process. Perhaps there is a 45% chance of 5 mm of rain, which would increase the moisture level of the soil. If the irrigation system is too old, or the water or electricity supply is insufficient or unstable, irrigation may break down; this is certainly a large probabilistic factor. On the other hand, there are deterministic costs, for instance the cost of water and electricity, as well as deterministic rewards, like keeping the soil moisture level in the 50%-75% depletion band or improving fruit calibration in the long run. Problems in which an agent must balance probabilistic and deterministic rewards and costs are common in decision-making, and Markov Decision Processes are used to model this type of optimization problem.
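In MDP terms (a generic statement of the objective, with the action a ∈ {0, 1, ..., 24} being tomorrow's irrigation duration in hours), the agent seeks the action-value function satisfying the Bellman optimality equation

```latex
Q^{*}(s, a) \;=\; \mathbb{E}\Bigl[\, r(s, a) \;+\; \gamma \,\max_{a'} Q^{*}(s', a') \;\Big|\; s, a \Bigr],
```

where the expectation runs over the stochastic transition to the next state s' (rain, equipment failures) and r collects the deterministic costs and rewards listed above.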

 

First of all, two LSTM models will be built into the algorithm. One model will estimate the Total Moisture in Soil Next Day, and the second will score the success of irrigation based on environmental conditions over a season. In the environment-development stage, the trained LSTM models take the current state (past climate data) and the action (irrigation amount), and the next state and reward are calculated from them. These values are used in the Deep Reinforcement Learning (DRL) training environment. In the agent-training stage, the DRL agent takes an action to schedule the next irrigation considering the current state of the field, and this action is evaluated through the Q-value function. The environment takes the current state and the action chosen by the agent and returns the next state and the reward. This interaction between the DRL agent and the training environment repeats until the agent approaches an optimal strategy for selecting the next day's irrigation amount.
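A compact sketch of this agent-environment loop, with a plain feed-forward DQN standing in for the project's Q(LSTM) network and a stubbed environment in place of the trained LSTM models (all shapes, names, and the reward are illustrative):

```python
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 10, 25          # state features; actions = 0..24 hours

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1                 # discount factor, exploration rate

def env_step(state, action):
    """Stub for the LSTM-built environment: returns next state and reward."""
    next_state = torch.randn(STATE_DIM)          # would come from the moisture LSTM
    reward = float(-abs(action - 12)) / 12.0     # would come from the scoring LSTM
    return next_state, reward

state = torch.randn(STATE_DIM)
for step in range(1000):
    # epsilon-greedy action: tomorrow's irrigation duration in hours
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        action = int(q_net(state).argmax())

    next_state, reward = env_step(state, action)

    # one-step temporal-difference target and Q-update
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_net(state)[action] - target) ** 2
    optim.zero_grad(); loss.backward(); optim.step()
    state = next_state
```

The loop mirrors the description above: the agent proposes an irrigation duration, the LSTM-based environment returns the next state and reward, and the Q-network is updated until the policy converges.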