
Scientists often find themselves with large data sets - sometimes in the form of gigabytes worth of data in a single file. Other times they’ll have hundreds of files, each containing a small amount of data. Either way, this much data is hard to manage, hard to make sense of, and even hard for your computer to process. You need a way to simplify the process, to make the data set more manageable, and to help you keep track of everything.

We’re in the process of writing Python scripts that will automatically analyze all your data for you and store it with meaningful, intuitive file names, all while using a real-world example. This way you know the skills you’re developing are practical and useful. The introduction to the tutorial explained the concepts we’re using; if “heat pump water heater,” “coefficient of performance (COP),” and “performance map” don’t mean anything to you, check it out.

Part two introduced the companion data set and split it into multiple files with user-friendly names. (The companion data set is a valuable part of the tutorial process, as it allows you to follow along: you can write the exact same code that I’ll present, run it, see the results, and compare them to the results I present.) We now have three data files, each containing test results at a specified ambient temperature. In each of those files we see the electricity consumption of the heat pump, the temperature of water in the storage tank, and the air temperature surrounding the water heater. The next step is to process those results: we need to write code that automatically makes sense of the data, calculates the COP of the heat pump, and plots the data so we can understand it visually.
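To make that concrete, here is a minimal sketch of the kind of processing we’re working toward. The file names, column names, tank volume, and units below are placeholder assumptions for illustration, not the tutorial’s actual code; the core idea is simply that COP is the heat added to the stored water divided by the electricity the heat pump consumed.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative constants for the storage tank (hypothetical values)
TANK_VOLUME_L = 300.0   # liters of stored water
WATER_DENSITY = 0.997   # kg per liter
WATER_CP = 4.186        # kJ/(kg*K), specific heat of water

# Hypothetical file names keyed by ambient temperature (deg C)
files = {
    10: "test_10C.csv",
    20: "test_20C.csv",
    35: "test_35C.csv",
}

cop_by_ambient = {}
for ambient_temp, path in files.items():
    data = pd.read_csv(path)

    # Heat added to the tank over the test: m * c_p * (T_final - T_initial), in kJ
    mass_kg = TANK_VOLUME_L * WATER_DENSITY
    delta_t = data["Water Temperature (C)"].iloc[-1] - data["Water Temperature (C)"].iloc[0]
    heat_added_kj = mass_kg * WATER_CP * delta_t

    # Electricity consumed by the heat pump over the same test, in kJ
    electricity_kj = data["Electricity Consumed (kJ)"].sum()

    # COP = useful heat delivered / electricity consumed
    cop_by_ambient[ambient_temp] = heat_added_kj / electricity_kj

# Plot COP against ambient temperature to visualize the performance map
plt.plot(list(cop_by_ambient.keys()), list(cop_by_ambient.values()), marker="o")
plt.xlabel("Ambient Temperature (C)")
plt.ylabel("COP")
plt.title("Heat Pump Water Heater Performance Map")
plt.show()
```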

Storing those results in well-organized folders means we’ll also need Python’s os module, which sounds intimidating because you could potentially screw up your computer using it. Don’t worry: we’ll only be using it to check for and add folders.
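As a preview, checking whether a folder exists and creating it when it doesn’t takes only a couple of calls. The folder name here is a made-up example, not the path the tutorial actually uses:

```python
import os

# Hypothetical folder where the analysis results will be saved
results_folder = os.path.join(os.getcwd(), "AnalysisResults")

# Create the folder only if it doesn't already exist
if not os.path.exists(results_folder):
    os.makedirs(results_folder)
```

On Python 3, os.makedirs(results_folder, exist_ok=True) collapses the check and the creation into a single call.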
