[MUSIC] Hello. In this sequence, you will learn how a forecast is made and how the uncertainty of a forecast is quantified, something any user of forecasts has to deal with when making decisions. This is particularly the case in the energy sector.

Vilhelm Bjerknes, a Norwegian geophysicist famous for developing the frontal model in meteorology, helped create the first modern method of weather forecasting. He was aware of the considerable work required to solve the equations. He knew that the proposed method was approximate in nature, but nevertheless thought that it would one day be possible to predict a change in the weather a day in advance, perhaps even a week. This is what one of his quotes relates, and it expresses precisely what we consider today to be the two limits of predictability for weather forecasts: the accuracy of the initial conditions and the knowledge of the laws of evolution. His view, however, implies a greater degree of determinism of atmospheric motion than what modern chaos theory later assumed.

The climate system evolves according to deterministic conservation laws: the fundamental laws of dynamics, of thermodynamics and of mass conservation. These laws are well known. In these equations, the left-hand members, circled in blue, represent the temporal evolution and the transport of quantities such as the horizontal and vertical components of the wind, the temperature, the humidity, or any other scalar. The member on the right, circled in red, represents the source or sink terms of the quantity considered. For example, for temperature, a heat source can be a hot surface or the condensation of water vapor into cloud droplets; a heat sink, on the other hand, can be caused by the evaporation of cloud droplets or of a body of surface water. In practice, the processes contributing to the sources and sinks of momentum (for the wind), heat or humidity are very numerous.
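Schematically, the conservation law for any prognostic variable can be written as below. This is an illustrative sketch in generic notation, not necessarily the exact form shown on the slide; here φ stands for a wind component, the temperature, the humidity or another scalar, and S_φ for its net sources and sinks:

```latex
\underbrace{\frac{\partial \varphi}{\partial t}}_{\text{temporal evolution}}
\;+\;
\underbrace{\vec{u}\cdot\vec{\nabla}\varphi}_{\text{transport}}
\;=\;
\underbrace{S_{\varphi}}_{\text{sources and sinks}}
```

The two terms on the left correspond to those circled in blue in the figure, and the right-hand term to the one circled in red.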
The representation of these sources and sinks is not always described by prognostic equations and can result from empirical approaches based on experimentation. These terms are therefore often expressed by a parametric formulation that best represents the physical processes. This parametric formulation is thus a source of uncertainty in the mathematical resolution of the conservation equations. For example, the turbulent fluxes of momentum, sensible heat or latent heat are sources or sinks for wind, temperature or humidity, depending on their sign. These turbulent fluxes can be represented by analogy with viscous diffusion. This parameterization is widely studied because it is the most general, and the turbulent diffusion coefficient K can be determined by experimentation. Certain parameterizations add a mass-flux term, allowing nonlocal mixing by thermal convection to be represented explicitly. We therefore see that this source or sink term can be represented in different ways, depending on the physical process considered. So even if the conservation laws are perfectly known, most of the source or sink terms of these equations can only be represented parametrically, and the resolution remains uncertain due to this approximate representation of the source and sink terms.

The conservation equations form a nonlinear system. Indeed, the transport term of the material derivative is the nonlinear term of the conservation equations. In a nonlinear process, a small error in the initial conditions grows very quickly and degrades the final result significantly. This phenomenon is called dynamic instability, or chaos. Thus, in view of Bjerknes' work, the predictability of the meteorological system is limited by the uncertain representation of certain terms of the evolution equations, and by the nonlinear nature of these equations, which makes the resolution sensitive to the initial conditions.
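This sensitivity to initial conditions can be demonstrated numerically. The sketch below integrates the Lorenz (1963) system, a classic low-dimensional model of chaotic atmospheric convection (a stand-in for illustration, not the full conservation equations), from two initial states differing by only one part in 10^8:

```python
import math

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def trajectory(x0, y0, z0, n_steps=2000):
    """Integrate n_steps steps (here 20 time units) and return the final state."""
    x, y, z = x0, y0, z0
    for _ in range(n_steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two initial states differing by only 1e-8 in the first component.
a = trajectory(1.0, 1.0, 1.0)
b = trajectory(1.0 + 1e-8, 1.0, 1.0)
separation = math.dist(a, b)
```

After this integration the two trajectories are separated by an amount many orders of magnitude larger than the initial perturbation: this is exactly the dynamic instability described above.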
To forecast the meteorological evolution, it is necessary to solve all the conservation equations from initial conditions, by implementing them on a computer. For this, we establish an artificial three-dimensional mesh of the atmosphere: we virtually cut the geographic area into boxes measuring several kilometres on each side. The size of the mesh determines the computing time. We solve the equations inside each box.

Creating initial conditions for a weather forecast is a specific research area of weather science known as data assimilation. Indeed, it is difficult to determine, at a given instant, the state of the atmosphere, that is to say the set of atmospheric variables, such as pressure, temperature, humidity and wind, over the whole volume with good resolution and good precision. The figures show the spatial distribution of the measurements collected by commercial flights, surface weather stations and satellites. We can see that the distribution of data is very heterogeneous and that few instruments provide information along the vertical. This information is therefore not sufficient. Indeed, the atmospheric model requires around 10^7 values for all the physical fields considered at all points of the model, whereas the observations are of the order of 10^6. A simple interpolation is not enough under these conditions. We therefore use a method called data assimilation. Here is how it works.

Data assimilation is a predictor-corrector method. Let us take a forecast, identified by the blue curve, calculated at the previous time step and valid at the instant considered. This forecast is used as a first guess. The observations available during an assimilation window, at the instants indicated by the green arrows, allow this first guess to be corrected to better estimate the real state of the atmosphere. This correction, called data assimilation, takes into account an estimate of the errors coming from the observations and from the first guess.
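The correction step can be sketched for a single scalar variable. This is a toy illustration of the idea, not the operational algorithm: the first guess and the observation are blended with weights inversely proportional to their error variances, and all the numbers are hypothetical.

```python
def analyse(first_guess, obs, var_guess, var_obs):
    """Combine a model first guess with an observation, weighting each
    by the inverse of its error variance (scalar optimal interpolation)."""
    # Gain: the fraction of the innovation (obs - guess) applied as correction.
    gain = var_guess / (var_guess + var_obs)
    analysis = first_guess + gain * (obs - first_guess)
    # The analysis error variance is smaller than either input variance.
    var_analysis = (1.0 - gain) * var_guess
    return analysis, var_analysis

# Hypothetical example: first guess 15.0 degrees C with error variance 4.0,
# observation 16.0 degrees C with error variance 1.0.
x_a, var_a = analyse(15.0, 16.0, 4.0, 1.0)
# The analysis is pulled toward the more accurate observation (x_a is about
# 15.8) and its variance (about 0.8) is smaller than that of either input.
```

Operational systems apply this same predictor-corrector logic to millions of variables at once, using full error covariance models for the observations and the first guess.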
These errors are statistically quantified beforehand, and from them an error model is developed for both the observations and the forecast constituting the first guess. The correction of the first guess by the observations is called the analysis; represented by the red curve, it forms the initial conditions of the following numerical forecast. However, we know that, due to the turbulent nature of the atmospheric environment, the inaccuracy of the measurements and the limits of the data assimilation method, even a very sophisticated one, it is impossible to obtain exact initial conditions.

So what does uncertainty in the initial conditions imply for the forecast? Take the temperature forecast. The mathematician and meteorologist Edward Lorenz clearly showed that an uncertainty in the initial state, however small, will cause uncertainty in the forecast after a certain period of time, a period which varies depending on the initial state of the atmosphere. This figure shows numerous trajectories forecasting the temperature from a distribution representing the possible values of an initial temperature around the most probable value, given the uncertainties associated with this value. The more the set of predicted temperature values is concentrated around a value, the greater the predictability of the temperature. If, on the contrary, the distribution of the predicted temperature values is wide, or even multimodal as suggested in this figure, the less predictable the temperature values are. It is on this principle that ensemble forecasting now operates: it consists in producing an ensemble of forecasts by slightly perturbing the initial conditions, in order to predict the distribution of the possible states of the atmosphere and the level of predictability. The predictability of the system depends on the dominant scales of temporal variability of the processes in the various components of the Earth system.
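The ensemble principle can be sketched with a toy nonlinear model. In this hypothetical illustration a chaotic logistic map stands in for the atmosphere: many members start from slightly perturbed initial conditions, and the spread of the final values measures the (lack of) predictability.

```python
import random
import statistics

def toy_forecast(x0, n_steps=50, r=3.9):
    """Iterate the logistic map, a simple chaotic stand-in for the atmosphere."""
    x = x0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
    return x

random.seed(42)
control = 0.3  # the most probable initial state

# Each ensemble member perturbs the initial condition very slightly.
members = [toy_forecast(control + random.gauss(0.0, 1e-4)) for _ in range(100)]

# A narrow spread would mean high predictability; here the chaotic map
# amplifies the tiny initial perturbations into a wide distribution.
spread = statistics.pstdev(members)
```

Operational ensemble systems do the same with the full forecast model: the distribution of the members gives both the most likely evolution and a measure of confidence in it.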
So while the memory of the atmosphere is only about 10 days, it is of the order of a month for the land surface and from several months to several years for the ocean.

To finish, a bit of history. Lewis Fry Richardson, a British mathematician, meteorologist and psychologist, imagined predicting the weather from the primitive atmospheric equations. He estimated that 64,000 people were needed for the forecast to be issued ahead of events. Cutting the globe into a rectangular grid, he made a small attempt to predict the evolution of atmospheric pressure, which gave a very disappointing result. Richardson's error lay in the resolution of the discretized equation of atmospheric pressure and in the choice of the time step for the resolution of the evolution equations. In 1948, the meteorologist Jule Charney put forward hypotheses making it possible to build the first atmospheric forecasting model on a synoptic scale for the mid-latitudes, and tested it successfully in March 1950 on the ENIAC, one of the first electronic computers. The weather-forecasting machine had become a reality. In 1952, Jule Charney wrote to Richardson to share his work with him.

At the end of this sequence, you should be able to list the elements conditioning the predictability of the future state of the atmosphere, namely the knowledge of the laws of evolution and the precision of the initial conditions. You should be able to list the steps in data assimilation, the method providing the initial conditions for a future forecast, and to explain the principle of ensemble forecasting. Thank you for your attention.