# Dynamic Data Analysis

## Teaching data literacy: Module 4

Analysis, Modeling & Simulation: Dynamic data analysis

Dynamic analysis examines changes in data collected over time or from repeated trials. Many scientific inquiries use computer programs to create hypothetical data based on models that describe a physical phenomenon, such as the weather or a sporting event. The process of creating such datasets is called simulation. An introduction to modeling and simulation is an appropriate end point for a high school or undergraduate course in data literacy. For students in higher education, especially in the natural and social sciences, it is just the beginning.

**Learning outcome**

- Advanced students (high school and above) learn basic modeling principles and simulation techniques

This activity has five steps:

- Software selection
- Developing models and algorithms
- Programming and automation
- Simulation
- Evaluation

First, watch the following video, which provides a brief introduction to modeling and simulation.

**Software selection**

**Classroom preparation:** Unlike elementary statistics, which one can compute by hand or with a calculator for small datasets, this level of complexity requires technology, not because of the size of a single dataset, but because modeling and simulation involve the creation of very large numbers of datasets. Many specialized software applications perform statistical analysis. In addition, many general math applications and even some programming languages contain libraries of statistical routines. Teachers must familiarize themselves with the computational resources available to them before they plan lessons to teach modeling and simulation, and then plan those lessons in ways supported by those resources. Instructors must also allow time for students to learn how to use these computational tools.

**Developing models and algorithms**

**Class content:** As shown in the video introduction to this section of the course, models are simple constructions that aim to imitate an actual phenomenon that one wishes to study. In this context, most models are mathematical representations of some real or imagined activity. These models aim to simulate reality by incorporating random behavior.

Video games, for example, model all kinds of behavior, human and otherwise. Given the familiarity of many students with such games, they are probably the best way to introduce the concept of modeling. The model is the basic structure of the game. The algorithms are the rules that produce the behavior students observe in the game. When games record their behavior, as I imagine most do, they create datasets. Datasets from games designed to mimic natural or human phenomena may be compared to data on those actual phenomena to determine how realistic a game is or is not. This is the objective of most modeling and simulation exercises.

Unfortunately, not all inquiries come in the form of a game, but at their most fundamental levels, they all behave like one. The uncertainty introduced by the random behavior of variables creates variability in outcomes. The accuracy of the parameters that govern the behavior of the random variables determines how realistic the models are.
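To make the game analogy concrete, here is a minimal sketch of a model in Python. The game, its rules, and its parameter are all hypothetical inventions for illustration: a player wins each of ten rounds with some probability `p_win`, which is the parameter governing the random behavior.

```python
import random

def play_game(p_win=0.6, n_rounds=10, seed=None):
    """Simulate one hypothetical game (the model).

    The rule (algorithm): each round, the player wins one point with
    probability p_win. The parameter p_win governs the random behavior,
    so changing it changes how realistic the model is.
    """
    rng = random.Random(seed)  # seeding makes a run reproducible
    return sum(1 for _ in range(n_rounds) if rng.random() < p_win)

# One play of the game produces one data point; recording many plays
# would produce a dataset that could be compared with real game data.
score = play_game(seed=42)
```

Running `play_game` twice with the same seed returns the same score, which is useful in the classroom for checking students' work against a common result.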

**Programming and automation**

- Models and algorithms must be programmed before they can be simulated. Unless programming itself is a learning objective, in classroom settings it is important to focus on projects for which programs already exist. Not long ago, that was not easily done, but it is relatively easy today with software that is often available as open source, i.e., free.

**Simulation**

- Simulation is the process of running a program that executes a model repeatedly to create many datasets that describe a phenomenon. It is from statistical analysis of these datasets that analysts can evaluate the accuracy of a model as a surrogate for the real thing it aims to describe.
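Continuing the hypothetical game sketched earlier, simulation means running that model many times and collecting the results into datasets. A minimal sketch, assuming the same invented `play_game` model:

```python
import random
import statistics

def play_game(p_win=0.6, n_rounds=10, rng=None):
    """One play of the hypothetical game: n_rounds independent rounds,
    each won with probability p_win."""
    rng = rng or random.Random()
    return sum(1 for _ in range(n_rounds) if rng.random() < p_win)

def simulate(n_runs=1000, seed=0):
    """Run the model repeatedly to create a dataset of n_runs scores."""
    rng = random.Random(seed)
    return [play_game(rng=rng) for _ in range(n_runs)]

# Statistical analysis of the simulated dataset: with p_win = 0.6 and
# 10 rounds, the mean score should be close to 6.
scores = simulate()
mean_score = statistics.mean(scores)
spread = statistics.stdev(scores)
```

It is this kind of summary statistic, computed over many simulated datasets, that analysts compare against real data to judge the model.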

**Evaluation**

- All the comments made above about static analysis apply equally to dynamic analysis. As the diagram at the top of this section implies, the process of evaluation is a step in an iterative cycle. It is important for students to understand that the process of inquiry never ends. Results of simulations are used initially to understand how to improve models and algorithms. Even when no further improvement seems possible, it remains important to continue to test them against the natural phenomena they model to observe changes in the natural environment.
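One way to show students this iterative cycle in code: compare a simulated summary statistic with one observed in the real world, and use the gap to decide whether the model's parameters need revision. This sketch reuses the hypothetical game model from earlier; the observed value is invented for illustration.

```python
import random
import statistics

def play_game(p_win=0.6, n_rounds=10, rng=None):
    """One play of the hypothetical game model."""
    rng = rng or random.Random()
    return sum(1 for _ in range(n_rounds) if rng.random() < p_win)

observed_mean = 6.4  # hypothetical statistic from real-world data
rng = random.Random(0)
scores = [play_game(rng=rng) for _ in range(1000)]
sim_mean = statistics.mean(scores)

# If the gap is large, revise the parameter (here, p_win) and simulate
# again -- the iterative cycle of model, simulate, evaluate.
gap = abs(sim_mean - observed_mean)
```

Even when the gap is small, the comparison should be repeated as new real-world data arrive, since the phenomenon being modeled may itself change.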

Go back to Module 1 of *Teaching data literacy*: Idea formation & abstraction

Go back to Module 2 of *Teaching data literacy*: Data organization

Go back to Module 3 of *Teaching data literacy*: Static data analysis

Return to home page of *Teaching data literacy*