Week 1
Question 1: The diagram for traditional programming had Rules and Data In, but what came out?
- Machine Learning
- Bugs
- Answers
- Binary
Question 2: The diagram for Machine Learning had Answers and Data In, but what came out?
- Bugs
- Models
- Rules
- Binary
Question 3: When I tell a computer what the data represents (e.g. this data is for walking, this data is for running), what is that process called?
- Programming the Data
- Categorizing the Data
- Learning the Data
- Labelling the Data
Question 4: What is a Dense layer?
- A single neuron
- A layer of disconnected neurons
- A layer of connected neurons
- Mass over Volume
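For Question 4, a minimal sketch (assuming TensorFlow/Keras, as used in the course) of a Dense layer, i.e. a layer of fully connected neurons:

```python
import tensorflow as tf

# A Dense layer is a layer of fully connected neurons.
# This single-neuron, single-input model mirrors the simplest Week 1 setup.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
```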
Question 5: What does a Loss function do?
- Measures how good the current ‘guess’ is
- Decides to stop training a neural network
- Figures out if you win or lose
- Generates a guess
Question 6: What does the optimizer do?
- Figures out how to efficiently compile your code
- Generates a new and improved guess
- Decides to stop training a neural network
- Measures how good the current guess is
Question 7: What is Convergence?
- A dramatic increase in loss
- The process of getting very close to the correct answer
- A programming API for AI
- The bad guys in the next ‘Star Wars’ movie
Question 8: What does model.fit do?
- It optimizes an existing model
- It determines if your activity is good for your body
- It makes a model fit available memory
- It trains the neural network to fit one set of values to another
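Questions 5 through 8 all refer to compiling and training a model. A minimal sketch, assuming a Week 1 style example of fitting xs to ys: the loss measures how good the current guess is, the optimizer (here 'sgd') generates a new and improved guess each step, and model.fit trains the network until its predictions converge on the underlying relationship y = 2x - 1.

```python
import numpy as np
import tensorflow as tf

# Data following y = 2x - 1 (the relationship the network should learn).
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])

# loss: measures how good the current 'guess' is
# optimizer: generates a new and improved guess on each step
model.compile(optimizer='sgd', loss='mean_squared_error')

# fit: trains the neural network to fit one set of values (xs) to another (ys);
# over many epochs the loss shrinks as the guesses converge on the answer.
model.fit(xs, ys, epochs=500)

print(model.predict(np.array([[10.0]])))  # should be close to 19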
Week 2
Question 1: What’s the name of the dataset of Fashion images used in this week’s code?
- Fashion MNIST
- Fashion Data
- Fashion MN
- Fashion Tensors
Question 2: What do the above-mentioned images look like?
- 28×28 Greyscale
- 28×28 Color
- 82×82 Greyscale
- 100×100 Color
Question 3: How many images are in the Fashion MNIST dataset?
- 10,000
- 42
- 70,000
- 60,000
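A minimal sketch (assuming tf.keras.datasets) that confirms the numbers behind Questions 1 through 3, and the train/test split asked about in Question 6: 70,000 greyscale 28×28 images in total, split into 60,000 for training and 10,000 held back for testing.

```python
import tensorflow as tf

# Fashion MNIST ships with Keras; load_data returns a train/test split.
(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.fashion_mnist.load_data()

print(train_images.shape)  # (60000, 28, 28) -- 60,000 training images, 28x28 greyscale
print(test_images.shape)   # (10000, 28, 28) -- 10,000 previously unseen test images
```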
Question 4: Why are there 10 output neurons?
- Purely arbitrary
- To make it train 10x faster
- There are 10 different labels
- To make it classify 10x faster
Question 5: What does ReLU do?
- It only returns x if x is less than zero
- It returns the negative of x
- For a value x, it returns 1/x
- It only returns x if x is greater than zero
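A sketch of the classifier that Questions 4 and 5 assume: the final Dense layer has 10 neurons because Fashion MNIST has 10 different labels, and relu passes a value x through only when x is greater than zero.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    # relu: returns x if x > 0, otherwise 0
    tf.keras.layers.Dense(128, activation='relu'),
    # 10 output neurons -- one per Fashion MNIST label
    tf.keras.layers.Dense(10, activation='softmax')
])
```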
Question 6: Why do you split data into training and test sets?
- To train a network with previously unseen data
- To make training quicker
- To test a network with previously unseen data
- To make testing quicker
Question 7: What method gets called when an epoch finishes?
- On_training_complete
- on_end
- on_epoch_finished
- on_epoch_end
Question 8: What parameter do you set in your fit function to tell it to use callbacks?
- callback=
- oncallback=
- callbacks=
- oncallbacks=
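A sketch for Questions 7 and 8, assuming a tf.keras.callbacks.Callback subclass (the class name and threshold here are illustrative): on_epoch_end is called when an epoch finishes, and the callbacks= parameter of model.fit is how you pass the callback in.

```python
import tensorflow as tf

class StopAtAccuracy(tf.keras.callbacks.Callback):
    # on_epoch_end is called by Keras when each epoch finishes.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # 'accuracy' is present if the model was compiled with metrics=['accuracy'].
        if logs.get('accuracy', 0) > 0.95:
            print('\nReached 95% accuracy, stopping training.')
            self.model.stop_training = True

# Given a compiled `model` and training data:
# model.fit(train_images, train_labels, epochs=10, callbacks=[StopAtAccuracy()])
```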
Week 3
Question 1: What is a Convolution?
- A technique to make images smaller
- A technique to make images bigger
- A technique to isolate features in images
- A technique to filter out unwanted images
Question 2: What is Pooling?
- A technique to combine pictures
- A technique to make images sharper
- A technique to isolate features in images
- A technique to reduce the information in an image while maintaining features
Question 3: How do Convolutions improve image recognition?
- They make processing of images faster
- They isolate features in images
- They make the image clearer
- They make the image smaller
Question 4: After passing a 3×3 filter over a 28×28 image, how big will the output be?
- 26×26
- 28×28
- 25×25
- 31×31
Question 5: After max pooling a 26×26 image with a 2×2 filter, how big will the output be?
- 13×13
- 56×56
- 26×26
- 28×28
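The arithmetic behind Questions 4 and 5 can be checked with model.summary(): a 3×3 filter with no padding trims one pixel from each edge (28×28 → 26×26), and a 2×2 max pool halves each dimension (26×26 → 13×13). A sketch, assuming a Keras Conv2D/MaxPooling2D stack:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # 3x3 filters, no padding: 28x28 input -> 26x26 output
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    # 2x2 max pooling: 26x26 -> 13x13
    tf.keras.layers.MaxPooling2D(2, 2),
])

model.summary()  # output shapes: (None, 26, 26, 64) then (None, 13, 13, 64)
```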
Question 6: Applying Convolutions on top of our Deep neural network will make training:
- Slower
- It depends on many factors. It might make your training faster or slower, and a poorly designed Convolutional layer may even be less efficient than a plain DNN!
- Stay the same
- Faster
Week 4
Question 1: Using Image Generator, how do you label images?
- It’s based on the directory the image is contained in
- It’s based on the file name
- TensorFlow figures it out from the contents
- You have to manually do it
Question 2: What method on the Image Generator is used to normalize the image?
- normalize_image
- rescale
- normalize
- Rescale_image
Question 3: How did we specify the training size for the images?
- The target_size parameter on the validation generator
- The training_size parameter on the training generator
- The training_size parameter on the validation generator
- The target_size parameter on the training generator
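A sketch covering Questions 1 through 3, assuming Keras's ImageDataGenerator (the "Image Generator" the quiz refers to): labels come from the sub-directory each image sits in, rescale normalizes the pixel values, and the target_size parameter on the training generator sets the training image size. The directory path is a placeholder.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# rescale normalizes pixel values from 0-255 down to 0-1
train_datagen = ImageDataGenerator(rescale=1.0 / 255)

# flow_from_directory labels each image by the sub-directory it is in,
# e.g. training_dir/horses/... and training_dir/humans/...
train_generator = train_datagen.flow_from_directory(
    'training_dir/',          # placeholder path
    target_size=(300, 300),   # images are resized to 300x300 for training
    batch_size=128,
    class_mode='binary'
)
```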
Question 4: When we specify the input_shape to be (300, 300, 3), what does that mean?
- There will be 300 images, each size 300, loaded in batches of 3
- Every Image will be 300×300 pixels, with 3 bytes to define color
- There will be 300 horses and 300 humans, loaded in batches of 3
- Every Image will be 300×300 pixels, and there should be 3 Convolutional Layers
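For Question 4, a sketch of where that shape appears: input_shape=(300, 300, 3) means every image is 300×300 pixels with 3 bytes (color channels) per pixel. The layer sizes here are illustrative.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # (300, 300, 3): 300x300-pixel images, 3 color channels per pixel
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                           input_shape=(300, 300, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid')  # binary: e.g. horses vs. humans
])
```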
Question 5: If your training data is close to 1.000 accuracy, but your validation data isn’t, what’s the risk here?
- No risk, that’s a great result
- You’re overfitting on your training data
- You’re underfitting on your validation data
- You’re overfitting on your validation data
Question 6: Convolutional Neural Networks are better for classifying images like horses and humans because:
- In these images, the features may be in different parts of the frame
- There’s a wide variety of horses
- There’s a wide variety of humans
- All of the above
Question 7: After reducing the size of the images, the training results were different. Why?
- There was less information in the images
- There was more condensed information in the images
- We removed some convolutions to handle the smaller images
- The training was faster