Week 1
Question 1: What is an example of a Univariate time series?
- Hour by hour weather
- Baseball scores
- Fashion items
- Hour by hour temperature
Question 2: What is an example of a Multivariate time series?
- Baseball scores
- Hour by hour temperature
- Hour by hour weather
- Fashion items
Question 3: What is imputed data?
- A good prediction of future data
- A bad prediction of future data
- A projection of unknown (usually past or missing) data
- Data that has been withheld for various reasons
Question 4: A sound wave is a good example of time series data
- False
- True
Question 5: What is Seasonality?
- Data that is only available at certain times of the year
- A regular change in shape of the data
- Weather data
- Data aligning to the 4 seasons of the calendar
Question 6: What is a trend?
- An overall consistent flat direction for data
- An overall consistent downward direction for data
- An overall consistent upward direction for data
- An overall direction for data regardless of direction
Question 7: In the context of time series, what is noise?
- Sound waves forming a time series
- Data that doesn’t have a trend
- Data that doesn’t have seasonality
- Unpredictable changes in time series data
Question 8: What is autocorrelation?
- Data that follows a predictable shape, even if the scale is different
- Data that doesn’t have noise
- Data that automatically lines up in trends
- Data that automatically lines up seasonally
Question 9: What is a non-stationary time series?
- One that has a constructive event forming trend and seasonality
- One that has a disruptive event breaking trend and seasonality
- One that is consistent across all seasons
- One that moves seasonally
Week 2
Question 1: What is a windowed dataset?
- A consistent set of subsets of a time series
- There’s no such thing
- The time series aligned to a fixed shape
- A fixed-size subset of a time series
Question 2: What does ‘drop_remainder=True’ do?
- It ensures that the data is all the same shape
- It ensures that all data is used
- It ensures that all rows in the data window are the same length by cropping data
- It ensures that all rows in the data window are the same length by adding data
Question 3: What’s the correct line of code to split an n-column window into n-1 columns for features and 1 column for a label?
- dataset = dataset.map(lambda window: (window[n-1], window[1]))
- dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
- dataset = dataset.map(lambda window: (window[-1:], window[:-1]))
- dataset = dataset.map(lambda window: (window[n], window[1]))
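A minimal sketch tying Questions 1–3 together, assuming a toy integer series and an arbitrary window size of 5:
import tensorflow as tf
# Toy series: the integers 0..9 stand in for a real time series.
dataset = tf.data.Dataset.range(10)
# Slide a fixed-size window of 5 values across the series, one step at a time.
# drop_remainder=True crops the trailing windows that would come out shorter
# than 5, so every row has the same shape.
dataset = dataset.window(5, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(5))
# Split each n-value window into n-1 feature values and 1 label value.
dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
for x, y in dataset:
    print(x.numpy(), y.numpy())  # e.g. [0 1 2 3] [4]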
Question 4: What does MSE stand for?
- Mean Slight error
- Mean Squared error
- Mean Series error
- Mean Second error
Question 5: What does MAE stand for?
- Mean Average Error
- Mean Advanced Error
- Mean Absolute Error
- Mean Active Error
Question 6: If time values are in time[] and series values are in series[], and we want to split the series into training and validation at time 1000, what is the correct code?
- time_train = time[:split_time]
  x_train = series[:split_time]
  time_valid = time[split_time:]
  x_valid = series[split_time:]
- time_train = time[split_time]
  x_train = series[split_time]
  time_valid = time[split_time:]
  x_valid = series[split_time:]
- time_train = time[:split_time]
  x_train = series[:split_time]
  time_valid = time[split_time]
  x_valid = series[split_time]
- time_train = time[split_time]
  x_train = series[split_time]
  time_valid = time[split_time]
  x_valid = series[split_time]
Question 7: If you want to inspect the learned parameters in a layer after training, what’s a good technique to use?
- Run the model with unit data and inspect the output for that layer
- Decompile the model and inspect the parameter set for that layer
- Assign a variable to the layer and add it to the model using that variable. Inspect its properties after training
- Iterate through the layers dataset of the model to find the layer you want
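A sketch of the technique from Question 7: keep a Python variable pointing at the layer, build the model with that variable, and read the layer’s weights after training. The single-unit Dense layer and the toy x/y data are placeholders:
import numpy as np
import tensorflow as tf
# Keep a handle on the layer so its learned parameters can be inspected later.
l0 = tf.keras.layers.Dense(units=1, input_shape=[1])
model = tf.keras.Sequential([l0])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
model.fit(xs, ys, epochs=500, verbose=0)
# The learned kernel and bias of that specific layer.
print("Layer weights:", l0.get_weights())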
Question 8: How do you set the learning rate of the SGD optimizer?
- Use the lr property
- You can’t set it
- Use the Rate property
- Use the RateOfLearning property
Question 9: If you want to amend the learning rate of the optimizer on the fly, after each epoch, what do you do?
- Use a LearningRateScheduler and pass it as a parameter to a callback
- Callback to a custom function and change the SGD property
- Use a LearningRateScheduler object in the callbacks namespace and assign that to the callback
- You can’t set it
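A sketch covering Questions 8 and 9, assuming a placeholder model and random data; the learning rate is set on the SGD constructor (the lr argument in course-era Keras, learning_rate in newer releases), and a LearningRateScheduler callback amends it after every epoch:
import numpy as np
import tensorflow as tf
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=[4]),
    tf.keras.layers.Dense(1),
])
# Question 8: the learning rate is a constructor argument of the optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-6, momentum=0.9)
model.compile(loss="mse", optimizer=optimizer)
# Question 9: the scheduler calls this function at the start of each epoch
# and assigns whatever it returns as the new learning rate.
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch: 1e-8 * 10 ** (epoch / 20))
x = np.random.rand(32, 4)
y = np.random.rand(32, 1)
model.fit(x, y, epochs=5, callbacks=[lr_schedule], verbose=0)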
Week 3
Question 1: If X is the standard notation for the input to an RNN, what are the standard notations for the outputs?
- Y
- H
- Y(hat) and H
- H(hat) and Y
Question 2: What is a sequence-to-vector output if an RNN has 30 cells numbered 0 to 29?
- The Y(hat) for the first cell
- The total Y(hat) for all cells
- The Y(hat) for the last cell
- The average Y(hat) for all 30 cells
Question 3: What does a Lambda layer in a neural network do?
- Changes the shape of the input or output data
- There are no Lambda layers in a neural network
- Pauses training without a callback
- Allows you to execute arbitrary code while training
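A sketch of how Lambda layers are typically used in this course’s models: arbitrary code runs inside the model itself, here to reshape the input and rescale the output. The layer sizes and the 100.0 scale factor are illustrative:
import tensorflow as tf
model = tf.keras.models.Sequential([
    # Add a channel dimension so a windowed univariate series fits what
    # the recurrent layers expect.
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[None]),
    tf.keras.layers.SimpleRNN(40, return_sequences=True),
    tf.keras.layers.SimpleRNN(40),
    tf.keras.layers.Dense(1),
    # Rescale the output closer to the range of the raw data.
    tf.keras.layers.Lambda(lambda x: x * 100.0),
])
model.summary()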
Question 4: What does the axis parameter of tf.expand_dims do?
- Defines the dimension index to remove when you expand the tensor
- Defines the axis around which to expand the dimensions
- Defines if the tensor is X or Y
- Defines the dimension index at which you will expand the shape of the tensor
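A quick demonstration of the axis parameter on a made-up 1-D tensor:
import tensorflow as tf
t = tf.constant([1.0, 2.0, 3.0])         # shape (3,)
print(tf.expand_dims(t, axis=0).shape)   # (1, 3) -- new dimension at index 0
print(tf.expand_dims(t, axis=-1).shape)  # (3, 1) -- new dimension at the end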
Question 5: A new loss function was introduced in this module, named after a famous statistician. What is it called?
- Hubble loss
- Hawking loss
- Huber loss
- Hyatt loss
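A small, self-contained illustration of Huber loss, using made-up predictions that include one outlier:
import tensorflow as tf
huber = tf.keras.losses.Huber(delta=1.0)
y_true = tf.constant([0.0, 0.0, 0.0])
y_pred = tf.constant([0.5, 1.0, 10.0])
# Quadratic below delta and linear above it, so the outlier at 10.0
# contributes far less than it would under squared error.
print(huber(y_true, y_pred).numpy())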
Question 6: What’s the primary difference between a simple RNN and an LSTM?
- LSTMs have a single output, RNNs have multiple
- LSTMs have multiple outputs, RNNs have a single one
- In addition to the H output, RNNs have a cell state that runs across all cells
- In addition to the H output, LSTMs have a cell state that runs across all cells
Question 7: If you want to clear out all temporary variables that TensorFlow might have from previous sessions, what code do you run?
- tf.cache.clear_session()
- tf.keras.backend.clear_session()
- tf.keras.clear_session
- tf.cache.backend.clear_session()
Question 8: What happens if you define a neural network with these two layers?
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(1),
- Your model will fail because you have the same number of cells in each LSTM
- Your model will fail because you need return_sequences=True after the first LSTM layer
- Your model will compile and run correctly
- Your model will fail because you need return_sequences=True after each LSTM layer
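A sketch of the corrected stack from Question 8, with Question 7’s clear_session call at the top; the input shape and compile settings are assumptions for illustration:
import tensorflow as tf
# Clear any temporary state left over from previous runs.
tf.keras.backend.clear_session()
model = tf.keras.models.Sequential([
    # The first LSTM must return the full sequence so the second
    # Bidirectional LSTM receives 3D input.
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(32, return_sequences=True),
        input_shape=[None, 1]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="sgd")
model.summary()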
Week 4
Question 1: How do you add a 1-dimensional convolution to your model for predicting time series data?
- Use a 1DConvolution layer type
- Use a Conv1D layer type
- Use a Convolution1D layer type
- Use a 1DConv layer type
Question 2: What’s the input shape for a univariate time series to a Conv1D?
- []
- [None, 1]
- [1]
- [1, None]
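A sketch for Questions 1 and 2: a Conv1D layer at the front of a time series model, with input_shape=[None, 1] for a univariate series of any window length. The filter count, kernel size, and downstream layers are illustrative:
import tensorflow as tf
model = tf.keras.models.Sequential([
    # Univariate series: any number of time steps, one value per step.
    tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1,
                           padding="causal", activation="relu",
                           input_shape=[None, 1]),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.summary()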
Question 3: You used a sunspots dataset that was stored in CSV. What’s the name of the Python library used to read CSVs?
- CommaSeparatedValues
- PyFiles
- CSV
- PyCSV
Question 4: If your CSV file has a header that you don’t want to read into your dataset, what do you execute before iterating through the file using a ‘reader’ object?
- reader.next
- reader.ignore_header()
- reader.read(next)
- next(reader)
Question 5: When you read a row from a reader and want to cast column 2 to another data type, for example, a float, what’s the correct syntax?
- float f = row[2].read()
- You can’t. It needs to be read into a buffer and a new float instantiated from the buffer
- Convert.toFloat(row[2])
- float(row[2])
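A sketch pulling Questions 3–5 together, assuming a hypothetical sunspots.csv whose third column (index 2) holds the sunspot value:
import csv
time_step = []
sunspots = []
with open("sunspots.csv") as csvfile:
    reader = csv.reader(csvfile, delimiter=",")
    next(reader)                       # skip the header row
    for row in reader:
        time_step.append(int(row[0]))  # column layout is assumed here
        sunspots.append(float(row[2])) # cast column 2 to a float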
Question 6: What was the sunspot seasonality?
- 11 years
- 11 or 22 years depending on who you ask
- 4 times a year
- 22 years
Question 7: After studying this course, what neural network type do you think is best for predicting time series like our sunspots dataset?
- RNN / LSTM
- DNN
- Convolutions
- A combination of all of the above
Question 8: Why is MAE a good analytic for measuring accuracy of predictions for time series?
- It punishes larger errors
- It biases towards small errors
- It only counts positive errors
- It doesn’t heavily punish larger errors the way squared errors do
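A small numeric illustration of why MAE is more forgiving of outliers than MSE; the error values are made up:
import numpy as np
errors = np.array([1.0, 1.0, 1.0, 10.0])  # one large error among small ones
mae = np.mean(np.abs(errors))             # 3.25  -- the outlier counts linearly
mse = np.mean(errors ** 2)                # 25.75 -- the outlier dominates
print(mae, mse)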