Introduction
Back in 2009, deep learning was only an emerging field. Only a few people recognised it as a fruitful area of inquiry. Today, it is being used for developing applications which were considered difficult or impossible to do till some time back.
Speech recognition, image recognition, finding patterns in a dataset, object classification in photographs, character text generation, self-driving cars and many more are just a few examples. Hence it is important to be familiar with deep learning and its concepts.
In this skilltest, we tested our community on basic concepts of Deep Learning. A total of 1070 people participated in this skill test.
If you missed taking the test, here is your opportunity to look at the questions and check your skill level. If you are just getting started with Deep Learning, here is a course to help you in your journey to master Deep Learning:
- Certified AI & ML Blackbelt+ Program
Overall Results
Below is the distribution of scores; this will help you evaluate your performance:
You can access your performance here. More than 200 people participated in the skill test and the highest score was 35. Here are a few statistics about the distribution.
Overall distribution
Mean Score: 16.45
Median Score: 20
Mode Score: 0
It seems like a lot of people started the contest very late or didn't take it beyond a few questions. I am not completely sure why, but maybe because the subject is advanced for a lot of the audience.
If you have any insight on why this is so, do let us know.
Helpful Resources
Fundamentals of Deep Learning – Starting with Artificial Neural Network
Practical Guide to implementing Neural Networks in Python (using Theano)
A Complete Guide on Getting Started with Deep Learning in Python
Tutorial: Optimizing Neural Networks using Keras (with Image recognition case study)
An Introduction to Implementing Neural Networks using TensorFlow
Questions and Answers
Q1. A neural network model is said to be inspired from the human brain.
The neural network consists of many neurons, each neuron takes an input, processes it and gives an output. Here's a diagrammatic representation of a real neuron.
Which of the following statement(s) correctly represents a real neuron?
A. A neuron has a single input and a single output only
B. A neuron has multiple inputs but a single output only
C. A neuron has a single input but multiple outputs
D. A neuron has multiple inputs and multiple outputs
E. All of the above statements are valid
Solution: (E)
A neuron can have a single Input / Output or multiple Inputs / Outputs.
Q2. Below is a mathematical representation of a neuron.
The different components of the neuron are denoted as:
- x1, x2,…, xN: These are inputs to the neuron. These can either be the actual observations from the input layer or an intermediate value from one of the hidden layers.
- w1, w2,…,wN: The Weight of each input.
- bi: Is termed as the Bias unit. These are constant values added to the input of the activation function corresponding to each weight. It works similar to an intercept term.
- a: Is termed as the activation of the neuron, which can be represented as
- and y: is the output of the neuron
Considering the above notations, will a line equation (y = mx + c) fall into the category of a neuron?
A. Yes
B. No
Solution: (A)
A single neuron with no non-linearity can be considered as a linear regression function.
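The equivalence can be sketched in a few lines of plain Python (names here are illustrative):

```python
# A single neuron with identity (linear) activation computes y = w*x + b,
# which is exactly the line equation y = mx + c with w = m and b = c.
def linear_neuron(x, w, b):
    return w * x + b

# A line with slope 2 and intercept 3, evaluated at x = 5
print(linear_neuron(5, 2, 3))  # 13
```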
Q3. Let us assume we implement an AND function with a single neuron. Below is a tabular representation of an AND function:
| X1 | X2 | X1 AND X2 |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
The activation function of our neuron is denoted as:
What would be the weights and bias?
(Hint: For which values of w1, w2 and b does our neuron implement an AND function?)
A. Bias = -1.5, w1 = 1, w2 = 1
B. Bias = 1.5, w1 = 2, w2 = 2
C. Bias = 1, w1 = 1.5, w2 = 1.5
D. None of these
Solution: (A)
A.
- f(-1.5*1 + 1*0 + 1*0) = f(-1.5) = 0
- f(-1.5*1 + 1*0 + 1*1) = f(-0.5) = 0
- f(-1.5*1 + 1*1 + 1*0) = f(-0.5) = 0
- f(-1.5*1 + 1*1 + 1*1) = f(0.5) = 1
Therefore option A is correct.
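Option A can also be verified in code (a minimal sketch, assuming the activation is a step function that fires when its input is positive):

```python
def step(z):
    # step activation: fires only when the weighted sum is positive
    return 1 if z > 0 else 0

def and_neuron(x1, x2):
    bias, w1, w2 = -1.5, 1, 1   # the values from option A
    return step(bias * 1 + w1 * x1 + w2 * x2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, and_neuron(x1, x2))  # matches the AND truth table
```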
Q4. A network is created when multiple neurons are stacked together. Let us take an example of a neural network simulating an XNOR function.
You can see that the last neuron takes input from the two neurons before it. The activation function for all the neurons is given by:
Suppose X1 is 0 and X2 is 1, what will be the output for the above neural network?
A. 0
B. 1
Solution: (A)
Output of a1: f(0.5*1 + -1*0 + -1*1) = f(-0.5) = 0
Output of a2: f(-1.5*1 + 1*0 + 1*1) = f(-0.5) = 0
Output of a3: f(-0.5*1 + 1*0 + 1*0) = f(-0.5) = 0
So the correct answer is A.
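The same computation, written as code using the weights read off the solution (the network figure itself is not reproduced here; the step activation is assumed as in Q3):

```python
def f(z):
    # step activation, as in the previous question
    return 1 if z > 0 else 0

def xnor_net(x1, x2):
    a1 = f(0.5 * 1 + -1 * x1 + -1 * x2)   # first hidden neuron
    a2 = f(-1.5 * 1 + 1 * x1 + 1 * x2)    # second hidden neuron
    a3 = f(-0.5 * 1 + 1 * a1 + 1 * a2)    # output neuron
    return a3

print(xnor_net(0, 1))  # 0, as computed above
```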
Q5. In a neural network, knowing the weight and bias of each neuron is the most important step. If you can somehow get the correct value of weight and bias for each neuron, you can approximate any function. What would be the best way to approach this?
A. Assign random values and pray to God they are correct
B. Search every possible combination of weights and biases till you get the best value
C. Iteratively check, after assigning a value, how far you are from the best values, and slightly change the assigned values to make them better
D. None of these
Solution: (C)
Option C is the description of gradient descent.
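The idea in option C can be sketched on a toy one-parameter loss (an illustrative example, not from the quiz):

```python
# Minimise L(w) = (w - 3)^2 by repeatedly stepping against the gradient.
w = 0.0
learning_rate = 0.1
for _ in range(100):
    grad = 2 * (w - 3)          # dL/dw: how far, and in which direction, we are off
    w -= learning_rate * grad   # slightly change w to make it better
print(w)  # converges toward 3, the minimum of the loss
```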
Q6. What are the steps for using a gradient descent algorithm?
- Calculate the error between the actual value and the predicted value
- Reiterate until you find the best weights of the network
- Pass an input through the network and get values from the output layer
- Initialize random weights and biases
- Go to each neuron which contributes to the error and change its corresponding values to reduce the error
A. 1, 2, 3, 4, 5
B. 5, 4, 3, 2, 1
C. 3, 2, 1, 5, 4
D. 4, 3, 1, 5, 2
Solution: (D)
Option D is correct
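The ordering 4, 3, 1, 5, 2 maps onto a minimal training loop. Below is a sketch for a single linear neuron on a toy dataset (all names and values here are illustrative):

```python
import random

random.seed(0)
w, b = random.random(), random.random()     # (4) initialize random weight and bias
data = [(x, 2 * x + 1) for x in range(5)]   # toy dataset sampled from y = 2x + 1
lr = 0.02

for _ in range(2000):                       # (2) reiterate until weights are good
    for x, y in data:
        pred = w * x + b                    # (3) pass input through, read the output
        error = pred - y                    # (1) error between actual and predicted
        w -= lr * error * x                 # (5) change the contributing values
        b -= lr * error                     #     to reduce the error
print(round(w, 2), round(b, 2))             # w, b approach 2 and 1
```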
Q7. Suppose you have inputs x, y, and z with values -2, 5, and -4 respectively. You have a neuron 'q' and neuron 'f' with functions:
q = x + y
f = q * z
Graphical representation of the functions is as follows:
What is the gradient of F with respect to x, y, and z?
(HINT: To calculate the gradient, you must find (df/dx), (df/dy) and (df/dz))
A. (-3, 4, 4)
B. (4, 4, 3)
C. (-4, -4, 3)
D. (3, -4, -4)
Solution: (C)
Option C is correct.
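The chain rule computation behind option C can be written out directly:

```python
# Forward pass
x, y, z = -2, 5, -4
q = x + y          # q = 3
f = q * z          # f = -12

# Backward pass via the chain rule
df_dq = z          # f = q*z  =>  df/dq = z
df_dz = q          #              df/dz = q = 3
df_dx = df_dq * 1  # q = x+y  =>  dq/dx = 1, so df/dx = z = -4
df_dy = df_dq * 1  #              dq/dy = 1, so df/dy = z = -4
print(df_dx, df_dy, df_dz)  # -4 -4 3
```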
Q8. Now let's revise the previous slides. We have learned that:
- A neural network is a (crude) mathematical representation of a brain, which consists of smaller components called neurons.
- Each neuron has an input, a processing part, and an output.
- These neurons are stacked together to form a network, which can be used to approximate any function.
- To get the best possible neural network, we can use techniques like gradient descent to update our neural network model.
Given above is a description of a neural network. When does a neural network model become a deep learning model?
A. When you add more hidden layers and increase the depth of the neural network
B. When there is higher dimensionality of data
C. When the problem is an image recognition problem
D. None of these
Solution: (A)
More depth means the network is deeper. There is no strict rule of how many layers are necessary to make a model deep, but if there are more than 2 hidden layers, the model is said to be deep.
Q9. A neural network can be considered as multiple simple equations stacked together. Suppose we want to replicate the function for the below mentioned decision boundary.
Using two simple inputs h1 and h2
What will be the final equation?
A. (h1 AND NOT h2) OR (NOT h1 AND h2)
B. (h1 OR NOT h2) AND (NOT h1 OR h2)
C. (h1 AND h2) OR (h1 OR h2)
D. None of these
Solution: (A)
As you can see, combining h1 and h2 in an intelligent way can get you a complex equation easily. Refer to Chapter 9 of this book.
Q10. "Convolutional Neural Networks can perform various types of transformation (rotations or scaling) on an input". Is the statement True or False?
A. True
B. False
Solution: (B)
Data preprocessing steps (viz. rotation, scaling) are necessary before you give the data to the neural network because the neural network cannot do it itself.
Q11. Which of the following techniques performs similar operations as dropout in a neural network?
A. Bagging
B. Boosting
C. Stacking
D. None of these
Solution: (A)
Dropout can be seen as an extreme form of bagging in which each model is trained on a single example and each parameter of the model is very strongly regularized by sharing it with the corresponding parameter in all the other models. Refer here.
Q12. Which of the following gives non-linearity to a neural network?
A. Stochastic Gradient Descent
B. Rectified Linear Unit
C. Convolution function
D. None of the above
Solution: (B)
Rectified Linear Unit is a non-linear activation function.
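A minimal sketch of the ReLU function:

```python
def relu(z):
    # Linear for z > 0, flat zero otherwise; the kink at 0 is the
    # non-linearity that lets stacked layers model non-linear functions.
    return max(0.0, z)

print(relu(-3.0), relu(2.5))  # 0.0 2.5
```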
Q13. In training a neural network, you notice that the loss does not decrease in the first few epochs.
The reasons for this could be:
- The learning rate is low
- Regularization parameter is high
- Stuck at local minima
What according to you are the probable reasons?
A. 1 and 2
B. 2 and 3
C. 1 and 3
D. Any of these
Solution: (D)
The problem can occur due to any of the reasons mentioned.
Q14. Which of the following is true about model capacity (where model capacity means the ability of a neural network to approximate complex functions)?
A. As the number of hidden layers increases, model capacity increases
B. As dropout ratio increases, model capacity increases
C. As learning rate increases, model capacity increases
D. None of these
Solution: (A)
Only option A is correct.
Q15. If you increase the number of hidden layers in a Multi Layer Perceptron, the classification error of test data always decreases. True or False?
A. True
B. False
Solution: (B)
This is not always true. Overfitting may cause the error to increase.
Q16. You are building a neural network where it gets input from the previous layer as well as from itself.
Which of the following architectures has feedback connections?
A. Recurrent Neural network
B. Convolutional Neural Network
C. Restricted Boltzmann Machine
D. None of these
Solution: (A)
Option A is correct.
Q17. What is the sequence of the following tasks in a perceptron?
- Initialize weights of the perceptron randomly
- Go to the next batch of the dataset
- If the prediction does not match the output, change the weights
- For a sample input, compute an output
A. 1, 2, 3, 4
B. 4, 3, 2, 1
C. 3, 1, 2, 4
D. 1, 4, 3, 2
Solution: (D)
Sequence D is correct.
Q18. Suppose that you have to minimize the cost function by changing the parameters. Which of the following techniques could be used for this?
A. Exhaustive Search
B. Random Search
C. Bayesian Optimization
D. Any of these
Solution: (D)
Any of the above mentioned techniques can be used to change parameters.
Q19. First Order Gradient descent would not work correctly (i.e. may get stuck) in which of the following graphs?
A.
B.
C.
D. None of these
Solution: (B)
This is a classic example of the saddle point problem of gradient descent.
Q20. The graph below shows the accuracy of a trained 3-layer convolutional neural network vs the number of parameters (i.e. number of feature kernels).
The trend suggests that as you increase the width of a neural network, the accuracy increases till a certain threshold value, and then starts decreasing.
What could be the possible reason for this decrease?
A. Even if the number of kernels increases, only a few of them are used for prediction
B. As the number of kernels increases, the predictive power of the neural network decreases
C. As the number of kernels increases, they start to correlate with each other, which in turn helps overfitting
D. None of these
Solution: (C)
As mentioned in option C, the possible reason could be kernel correlation.
Q21. Suppose we have a one hidden layer neural network as shown above. The hidden layer in this network works as a dimensionality reductor. Now instead of using this hidden layer, we replace it with a dimensionality reduction technique such as PCA.
Would the network that uses a dimensionality reduction technique always give the same output as a network with a hidden layer?
A. Yes
B. No
Solution: (B)
Because PCA works on correlated features, whereas hidden layers work on predictive capacity of features.
Q22. Can a neural network model the function (y=1/x)?
A. Yes
B. No
Solution: (A)
Option A is true, because the activation function can be a reciprocal function.
Q23. In which neural net architecture does weight sharing occur?
A. Convolutional Neural Network
B. Recurrent Neural Network
C. Fully Connected Neural Network
D. Both A and B
Solution: (D)
Option D is correct.
Q24. Batch Normalization is helpful because
A. It normalizes (changes) all the input before sending it to the next layer
B. It returns the normalized mean and standard deviation of the weights
C. It is a very efficient backpropagation technique
D. None of these
Solution: (A)
To read more about batch normalization, refer to this video.
Q25. Instead of trying to reach absolute zero error, we set a metric called Bayes error, which is the error we hope to achieve. What could be the reason for using Bayes error?
A. Input variables may non contain complete information about the output variable
B. The system (that creates the input-output mapping) may be stochastic
C. Limited training data
D. All the above
Solution: (D)
In reality, achieving accurate prediction is a myth. So we should hope to achieve an "achievable result".
Q26. The number of neurons in the output layer should match the number of classes (where the number of classes is greater than 2) in a supervised learning task. True or False?
A. True
B. False
Solution: (B)
It depends on the output encoding. If it is one-hot encoding, then it's true. But you can have two outputs for four classes, and take the binary values as four classes (00, 01, 10, 11).
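The binary-encoding counter-example can be made concrete (an illustrative sketch):

```python
# Two output neurons suffice for four classes if we decode their
# joint binary pattern instead of using a one-hot output per class.
codes = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

def decode(out1, out2):
    return codes[(out1, out2)]

print(decode(1, 0))  # class 2, from only two output neurons
```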
Q27. In a neural network, which of the following techniques is used to deal with overfitting?
A. Dropout
B. Regularization
C. Batch Normalization
D. All of these
Solution: (D)
All of these techniques can be used to deal with overfitting.
Q28. Y = ax^2 + bx + c (polynomial equation of degree 2)
Can this equation be represented by a neural network with a single hidden layer with linear threshold?
A. Yes
B. No
Solution: (B)
The answer is no, because having a linear threshold restricts your neural network and, in simple terms, makes it a mere linear transformation function.
Q29. What is a dead unit in a neural network?
A. A unit which doesn't update during training by any of its neighbours
B. A unit which does not respond completely to any of the training patterns
C. The unit which produces the biggest sum-squared error
D. None of these
Solution: (A)
Option A is correct.
Q30. Which of the following statements is the best description of early stopping?
A. Train the network until a local minimum in the error function is reached
B. Simulate the network on a test dataset after every epoch of training. Stop training when the generalization error starts to increase
C. Add a momentum term to the weight update in the Generalized Delta Rule, so that training converges more quickly
D. A faster version of backpropagation, such as the 'Quickprop' algorithm
Solution: (B)
Option B is correct.
Q31. What if we use a learning rate that's too big?
A. Network will converge
B. Network will not converge
C. Can't Say
Solution: (B)
Option B is correct because the error rate would become erratic and explode.
Q32. The network shown in Figure 1 is trained to recognize the characters H and T as shown below:
What would be the output of the network?
A. (image option, not reproduced)
B. (image option, not reproduced)
C. (image option, not reproduced)
D. Could be A or B depending on the weights of the neural network
Solution: (D)
Without knowing the weights and biases of a neural network, we cannot comment on what output it would give.
Q33. Suppose a convolutional neural network is trained on the ImageNet dataset (object recognition dataset). This trained model is then given a completely white image as an input. The output probabilities for this input would be equal for all classes. True or False?
A. True
B. False
Solution: (B)
There would be some neurons which do not activate for white pixels as input. So the classes won't be equal.
Q34. When a pooling layer is added in a convolutional neural network, translation invariance is preserved. True or False?
A. True
B. False
Solution: (A)
Translation invariance is induced when you use pooling.
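Max pooling illustrates this; in the 1-D sketch below, the same activation shifted within a pooling window produces the same pooled output:

```python
def max_pool(values, size=2):
    # non-overlapping max pooling over a 1-D activation map
    return [max(values[i:i + size]) for i in range(0, len(values), size)]

a = [0, 9, 0, 0]   # strong activation at position 1
b = [9, 0, 0, 0]   # the same activation shifted by one position
print(max_pool(a), max_pool(b))  # both pool to [9, 0]
```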
Q35. Which gradient technique is more advantageous when the data is too big to handle in RAM simultaneously?
A. Full Batch Gradient Descent
B. Stochastic Gradient Descent
Solution: (B)
Option B is correct.
Q36. The graph represents the gradient flow of a 4-hidden-layer neural network which is trained using sigmoid activation function per epoch of training. The neural network suffers from the vanishing gradient problem.
Which of the following statements is true?
A. Hidden layer 1 corresponds to D, Hidden layer 2 corresponds to C, Hidden layer 3 corresponds to B and Hidden layer 4 corresponds to A
B. Hidden layer 1 corresponds to A, Hidden layer 2 corresponds to B, Hidden layer 3 corresponds to C and Hidden layer 4 corresponds to D
Solution: (A)
This is a description of the vanishing gradient problem. As the backprop algorithm goes to the starting layers, learning decreases.
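The effect follows from the sigmoid's local gradient, which is at most 0.25; backpropagation multiplies one such factor per layer, so earlier layers receive geometrically smaller gradients:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)), maximised at z = 0
d = sigmoid(0) * (1 - sigmoid(0))  # 0.25
for depth in range(1, 5):
    # gradient factor reaching a layer `depth` steps from the output
    print(depth, d ** depth)       # shrinks geometrically with depth
```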
Q37. For a classification task, instead of random weight initializations in a neural network, we set all the weights to zero. Which of the following statements is true?
A. There will not be any problem and the neural network will train properly
B. The neural network will train but all the neurons will end up recognizing the same thing
C. The neural network will not train as there is no net gradient change
D. None of these
Solution: (B)
Option B is correct.
Q38. There is a plateau at the start. This is happening because the neural network gets stuck at local minima before going on to global minima.
To avoid this, which of the following strategies should work?
A. Increase the number of parameters, as the network would not get stuck at local minima
B. Decrease the learning rate by 10 times at the start and then use momentum
C. Jitter the learning rate, i.e. change the learning rate for a few epochs
D. None of these
Solution: (C)
Option C can be used to take a neural network out of the local minima in which it is stuck.
Q39. For an image recognition problem (recognizing a cat in a photo), which architecture of neural network would be better suited to solve the problem?
A. Multi Layer Perceptron
B. Convolutional Neural Network
C. Recurrent Neural network
D. Perceptron
Solution: (B)
Convolutional Neural Network would be better suited for image related problems because of its inherent nature of taking into account changes in nearby locations of an image.
Q40. Suppose while training, you see this issue. The error suddenly increases after a couple of iterations.
You decide that there must be a problem with the data. You plot the data and find the insight that the original data is somewhat skewed and that may be causing the problem.
What will you do to deal with this challenge?
A. Normalize
B. Apply PCA and and then Normalize
C. Take Log Transform of the data
D. None of these
Solution: (B)
First you would remove the correlations in the data and then zero center it.
Q41. Which of the following is a decision boundary of a Neural Network?
A) B
B) A
C) D
D) C
E) All of these
Solution: (E)
A neural network is said to be a universal function approximator, so it can theoretically represent any decision boundary.
Q42. In the graph below, we observe that the error has many "ups and downs".
Should we be worried?
A. Yes, because this means there is a problem with the learning rate of the neural network.
B. No, as long as there is a cumulative decrease in both training and validation error, we don't need to worry.
Solution: (B)
Option B is correct. In order to decrease these "ups and downs", try increasing the batch size.
Q43. What are the factors to consider in selecting the depth of a neural network?
- Type of neural network (e.g. MLP, CNN etc.)
- Input data
- Computation power, i.e. hardware capabilities and software capabilities
- Learning Rate
- The output function to map
A. 1, 2, 4, 5
B. 2, 3, 4, 5
C. 1, 3, 4, 5
D. All of these
Solution: (D)
All of the above factors are important in selecting the depth of a neural network.
Q44. Consider the scenario. The problem you are trying to solve has a small amount of data. Fortunately, you have a pre-trained neural network that was trained on a similar problem. Which of the following methodologies would you choose to make use of this pre-trained network?
A. Re-train the model for the new dataset
B. Assess on every layer how the model performs and only select a few of them
C. Fine tune the last couple of layers only
D. Freeze all the layers except the last, re-train the last layer
Solution: (D)
If the dataset is mostly similar, the best method would be to train only the last layer, as all previous layers work as feature extractors.
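The freeze-all-but-last idea can be sketched abstractly (the `Layer` class and `trainable` flag below are illustrative stand-ins, not the API of any specific library, though deep learning frameworks expose a similar switch):

```python
class Layer:
    """Stand-in for a pre-trained layer with a trainable switch."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

model = [Layer("conv1"), Layer("conv2"), Layer("fc"), Layer("output")]

for layer in model[:-1]:
    layer.trainable = False   # earlier layers act as fixed feature extractors

print([(l.name, l.trainable) for l in model])  # only "output" stays trainable
```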
Q45. An increase in the size of a convolutional kernel would necessarily increase the performance of a convolutional network.
A. True
B. False
Solution: (B)
Increasing kernel size would not necessarily increase performance. This depends heavily on the dataset.
End Notes
I hope you enjoyed taking the test and found the solutions helpful. The test focused on conceptual knowledge of Deep Learning.
We tried to clear all your doubts through this article, but if we have missed out on something then let me know in the comments below. If you have any suggestions or improvements you think we should make in the next skilltest, let us know by dropping your feedback in the comments section.
Learn, compete, hack and get hired!
Source: https://www.analyticsvidhya.com/blog/2017/01/must-know-questions-deep-learning/