


Must-Know Questions in Deep Learning

Introduction

Back in 2009, deep learning was only an emerging field. Only a few people recognised it as a fruitful area of research. Today, it is being used for developing applications which were considered difficult or impossible to do till some time back.

Speech recognition, image recognition, finding patterns in a dataset, object classification in photographs, character text generation, self-driving cars and many more are just a few examples. Hence it is important to be familiar with deep learning and its concepts.

In this skill test, we tested our community on basic concepts of Deep Learning. A total of 1070 people participated in this skill test.

If you missed taking the test, here is your opportunity to look at the questions and check your skill level. If you are just getting started with Deep Learning, here is a course to assist you in your journey to Master Deep Learning:

  • Certified AI & ML Blackbelt+ Program

Overall Results

Below is the distribution of scores; this will help you evaluate your performance:

You can access your performance here. More than 200 people participated in the skill test and the highest score was 35. Here are a few statistics about the distribution.

Overall distribution

Mean Score: 16.45

Median Score: 20

Mode Score: 0

It seems like a lot of people started the contest very late or didn't take it beyond a few questions. I am not completely sure why, but maybe it is because the subject is advanced for a lot of the audience.

If you have any insight on why this is so, do let us know.

Helpful Resources

Fundamentals of Deep Learning – Starting with Artificial Neural Network

Practical Guide to implementing Neural Networks in Python (using Theano)

A Complete Guide on Getting Started with Deep Learning in Python

Tutorial: Optimizing Neural Networks using Keras (with Image recognition case study)

An Introduction to Implementing Neural Networks using TensorFlow

Questions and Answers

Q1. A neural network model is said to be inspired by the human brain.

The neural network consists of many neurons; each neuron takes an input, processes it and gives an output. Here's a diagrammatic representation of a real neuron.

Which of the following statement(s) correctly represents a real neuron?

A. A neuron has a single input and a single output only

B. A neuron has multiple inputs but a single output only

C. A neuron has a single input but multiple outputs

D. A neuron has multiple inputs and multiple outputs

E. All of the above statements are valid

Q2. Below is a mathematical representation of a neuron.

The different components of the neuron are denoted as:

  • x1, x2,…, xN: These are inputs to the neuron. These can either be the actual observations from the input layer or an intermediate value from one of the hidden layers.
  • w1, w2,…,wN: The weight of each input.
  • bi: Is termed the bias unit. These are constant values added to the input of the activation function corresponding to each weight. It works similarly to an intercept term.
  • a: Is termed the activation of the neuron, which can be represented as
  • and y: is the output of the neuron

Considering the above notation, will a line equation (y = mx + c) fall into the category of a neuron?

A. Yes

B. No
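As a quick illustration, here is a minimal sketch in Python/NumPy of the neuron described above (the function and variable names are my own). With the identity activation f(z) = z, a one-input neuron reduces exactly to the line equation y = mx + c:

```python
import numpy as np

def neuron(x, w, b, f):
    """Compute the activation a = f(w . x + b)."""
    return f(np.dot(w, x) + b)

# With the identity activation, a single-input neuron is y = mx + c
identity = lambda z: z
m, c = 2.0, 1.0
print(neuron(np.array([3.0]), np.array([m]), c, identity))  # 7.0 = 2*3 + 1
```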

Q3. Let us assume we implement an AND function with a single neuron. Below is a tabular representation of an AND function:

X1 X2 X1 AND X2
0 0 0
0 1 0
1 0 0
1 1 1

The activation function of our neuron is denoted as:

What would be the weights and bias?

(Hint: For which values of w1, w2 and b does our neuron implement an AND function?)

A. Bias = -1.5, w1 = 1, w2 = 1

B. Bias = 1.5, w1 = 2, w2 = 2

C. Bias = 1, w1 = 1.5, w2 = 1.5

D. None of these
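One way to check a candidate answer is to plug the weights and bias into a neuron and run it over the truth table. A minimal sketch (the threshold-at-zero step activation is an assumption, matching the usual convention for this puzzle):

```python
def step(z):
    """Fire (output 1) when the weighted input crosses zero."""
    return 1 if z >= 0 else 0

def neuron(x1, x2, w1, w2, b):
    return step(w1 * x1 + w2 * x2 + b)

# Try bias = -1.5, w1 = 1, w2 = 1 over the AND truth table
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, neuron(x1, x2, 1, 1, -1.5))
# Only (1, 1) crosses the threshold: 1 + 1 - 1.5 = 0.5 >= 0
```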

Q4. A network is created when multiple neurons are stacked together. Let us take an example of a neural network simulating an XNOR function.

You can see that the last neuron takes input from the two neurons before it. The activation function for all the neurons is given by:

Suppose X1 is 0 and X2 is 1, what will be the output for the above neural network?

A. 0

B. 1

Q5. In a neural network, knowing the weight and bias of each neuron is the most important step. If you can somehow get the correct value of weight and bias for each neuron, you can approximate any function. What would be the best way to approach this?

A. Assign random values and pray to God they are correct

B. Search every possible combination of weights and biases till you get the best values

C. Iteratively check, after assigning a value, how far you are from the best values, and slightly change the assigned values to make them better

D. None of these

Q6. What are the steps for using a gradient descent algorithm?

  1. Calculate the error between the actual value and the predicted value
  2. Reiterate until you find the best weights of the network
  3. Pass an input through the network and get values from the output layer
  4. Initialize random weights and biases
  5. Go to each neuron which contributes to the error and change its respective values to reduce the error

A. 1, 2, 3, 4, 5

B. 5, 4, 3, 2, 1

C. 3, 2, 1, 5, 4

D. 4, 3, 1, 5, 2
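To make the ordering concrete, here is a minimal gradient-descent sketch for a single linear neuron under squared error; the data, learning rate, and names are illustrative, and the comments mark which of the five steps above each line performs:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -3.0]) + 1.0        # target: known weights and bias

w, b = rng.normal(size=2), 0.0             # 4. initialize random weights and bias
lr = 0.1
for epoch in range(200):                   # 2. reiterate until weights are good
    pred = X @ w + b                       # 3. pass input through the network
    err = pred - y                         # 1. error between actual and predicted
    w -= lr * X.T @ err / len(y)           # 5. change each weight to reduce error
    b -= lr * err.mean()
print(w, b)                                # approaches [2, -3] and 1
```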

Q7. Suppose you have inputs x, y, and z with values -2, 5, and -4 respectively. You have a neuron 'q' and a neuron 'f' with functions:

q = x + y

f = q * z

The graphical representation of the functions is as follows:

What is the gradient of F with respect to x, y, and z?

(HINT: To calculate the gradient, you must find (df/dx), (df/dy) and (df/dz))

A. (-3,4,4)

B. (4,4,3)

C. (-4,-4,3)

D. (3,-4,-4)
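This can be worked out with the chain rule: df/dq = z and dq/dx = dq/dy = 1, so df/dx = df/dy = z = -4, while df/dz = q = x + y = 3. A tiny sketch that spells out the same computation:

```python
x, y, z = -2.0, 5.0, -4.0
q = x + y          # forward pass: q = 3
f = q * z          # f = -12

# backward pass (chain rule)
df_dz = q          # 3
df_dq = z          # -4
df_dx = df_dq * 1  # dq/dx = 1, so df/dx = -4
df_dy = df_dq * 1  # dq/dy = 1, so df/dy = -4
print(df_dx, df_dy, df_dz)  # -4.0 -4.0 3.0
```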

Q8. Now let's revise the previous slides. We have learned that:

  • A neural network is a (crude) mathematical representation of a brain, which consists of smaller components called neurons.
  • Each neuron has an input, a processing function, and an output.
  • These neurons are stacked together to form a network, which can be used to approximate any function.
  • To get the best possible neural network, we can use techniques like gradient descent to update our neural network model.

Given above is a description of a neural network. When does a neural network model become a deep learning model?

A. When you add more hidden layers and increase the depth of the neural network

B. When there is higher dimensionality of data

C. When the problem is an image recognition problem

D. None of these

Q9. A neural network can be considered as multiple simple equations stacked together. Suppose we want to replicate the function for the below-mentioned decision boundary.

Using two simple inputs h1 and h2

What will be the final equation?

A. (h1 AND NOT h2) OR (NOT h1 AND h2)

B. (h1 OR NOT h2) AND (NOT h1 OR h2)

C. (h1 AND h2) OR (h1 OR h2)

D. None of these

Q10. "Convolutional Neural Networks can perform various types of transformation (rotations or scaling) in an input". Is the statement right True or Fake?

A. Truthful

B. False

Q11. Which of the following techniques perform similar operations every bit dropout in a neural network?

A. Bagging

B. Boosting

C. Stacking

D. None of these

Q12. Which of the following gives non-linearity to a neural network?

A. Stochastic Gradient Descent

B. Rectified Linear Unit

C. Convolution function

D. None of the above
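For reference, the Rectified Linear Unit is the element-wise map f(x) = max(0, x): linear on each side of zero, but non-linear overall. A one-liner sketch:

```python
import numpy as np

def relu(x):
    """Element-wise max(0, x): passes positive inputs, zeroes the rest."""
    return np.maximum(0, x)

print(relu(np.array([-2.0, 0.0, 3.0])))  # [0. 0. 3.]
```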

Q13. In training a neural network, you notice that the loss does not decrease in the first few epochs.

The reasons for this could be:

  1. The learning rate is low
  2. The regularization parameter is high
  3. Stuck at a local minimum

What, according to you, are the probable reasons?

A. 1 and 2

B. 2 and 3

C. 1 and 3

D. Any of these

Q14. Which of the following is true about model capacity (where model capacity means the ability of a neural network to approximate complex functions)?

A. As the number of hidden layers increases, model capacity increases

B. As the dropout ratio increases, model capacity increases

C. As the learning rate increases, model capacity increases

D. None of these

Q15. If you increase the number of hidden layers in a Multi Layer Perceptron, the classification error on test data always decreases. True or False?

A. True

B. False

Q16. You are building a neural network where it gets input from the previous layer as well as from itself.

Which of the following architectures has feedback connections?

A. Recurrent Neural Network

B. Convolutional Neural Network

C. Restricted Boltzmann Machine

D. None of these

Q17. What is the sequence of the following tasks in a perceptron?

  1. Initialize the weights of the perceptron randomly
  2. Go to the next batch of the dataset
  3. If the prediction does not match the output, change the weights
  4. For a sample input, compute an output

A. 1, 2, 3, 4

B. 4, 3, 2, 1

C. 3, 1, 2, 4

D. 1, 4, 3, 2
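A minimal perceptron-training sketch following that task list (the AND data and the learning rate of 1 are illustrative); the comments mark which numbered task each line performs:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])                 # AND labels

w = rng.normal(size=2)                     # 1. initialize weights randomly
b = 0.0
for epoch in range(20):                    # 2. go to the next batch (next pass)
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b >= 0)        # 4. compute an output for a sample
        if pred != yi:                     # 3. if the prediction doesn't match,
            w += (yi - pred) * xi          #    change the weights
            b += (yi - pred)
print(w, b)                                # weights that separate the AND classes
```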

Q18. Suppose that you have to minimize the cost function by changing the parameters. Which of the following techniques could be used for this?

A. Exhaustive Search

B. Random Search

C. Bayesian Optimization

D. Any of these

Q19. First-order gradient descent would not work correctly (i.e. may get stuck) in which of the following graphs?

A.

B.

C.

D. None of these

Q20. The below graph shows the accuracy of a trained 3-layer convolutional neural network vs the number of parameters (i.e. the number of feature kernels).

The trend suggests that as you increase the width of a neural network, the accuracy increases till a certain threshold value, and then starts decreasing.

What could be the possible reason for this decrease?

A. Even if the number of kernels increases, only a few of them are used for prediction

B. As the number of kernels increases, the predictive power of the neural network decreases

C. As the number of kernels increases, they start to correlate with each other, which in turn helps overfitting

D. None of these

Q21. Suppose we have a one-hidden-layer neural network as shown above. The hidden layer in this network works as a dimensionality reducer. Now, instead of using this hidden layer, we replace it with a dimensionality reduction technique such as PCA.

Would the network that uses a dimensionality reduction technique always give the same output as the network with the hidden layer?

A. Yes

B. No

Q22. Can a neural network model the function (y = 1/x)?

A. Yes

B. No

Q23. In which neural net architecture does weight sharing occur?

A. Convolutional Neural Network

B. Recurrent Neural Network

C. Fully Connected Neural Network

D. Both A and B

Q24. Batch Normalization is helpful because

A. It normalizes (changes) all the input before sending it to the next layer

B. It returns the normalized mean and standard deviation of the weights

C. It is a very efficient backpropagation technique

D. None of these
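As a reminder of what the layer actually computes, here is a minimal batch-normalization sketch over a mini-batch; gamma and beta stand in for the learnable scale and shift, and eps guards against division by zero:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 50.0], [3.0, 70.0], [5.0, 90.0]])
print(batch_norm(x))  # each column now has zero mean and unit variance
```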

Q25. Instead of trying to reach absolute zero error, we set a metric called Bayes error, which is the error we hope to achieve. What could be the reason for using Bayes error?

A. The input variables may not contain complete information about the output variable

B. The system (that creates the input-output mapping) may be stochastic

C. Limited training data

D. All of the above

Q26. The number of neurons in the output layer should match the number of classes (where the number of classes is greater than 2) in a supervised learning task. True or False?

A. True

B. False

Q27. In a neural network, which of the following techniques is used to deal with overfitting?

A. Dropout

B. Regularization

C. Batch Normalization

D. All of these
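As an illustration of the first of these, here is a minimal (inverted) dropout sketch as it would run at training time; p is the drop probability, and the seed is only there to make the output reproducible:

```python
import numpy as np

def dropout(a, p=0.5, seed=0):
    """Inverted dropout: zero each activation with probability p, then
    rescale the survivors so the expected activation is unchanged."""
    rng = np.random.default_rng(seed)
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

a = np.ones((2, 4))
print(dropout(a))  # roughly half the entries zeroed, the rest scaled to 2.0
```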

Q28. Y = ax^2 + bx + c (polynomial equation of degree 2)

Can this equation be represented by a neural network with a single hidden layer with linear threshold?

A. Yes

B. No

Q29. What is a dead unit in a neural network?

A. A unit which doesn't get updated during training by any of its neighbours

B. A unit which does not respond completely to any of the training patterns

C. The unit which produces the biggest sum-squared error

D. None of these

Q30. Which of the following statements is the best description of early stopping?

A. Train the network until a local minimum in the error function is reached

B. Simulate the network on a test dataset after every epoch of training. Stop training when the generalization error starts to increase

C. Add a momentum term to the weight update in the Generalized Delta Rule, so that training converges more quickly

D. A faster version of backpropagation, such as the 'Quickprop' algorithm
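In practice, option B is usually implemented with a callback that watches validation error. A minimal sketch using tf.keras on toy data (the model, data, and patience value are all illustrative):

```python
import numpy as np
import tensorflow as tf

# Toy regression data: y = 2x + 1 with noise
rng = np.random.default_rng(0)
x = rng.normal(size=(256, 1)).astype("float32")
y = 2 * x + 1 + 0.1 * rng.normal(size=(256, 1)).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# Stop when validation loss has not improved for 5 epochs in a row,
# and roll back to the best weights seen so far
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```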

Q31. What if we use a learning rate that's too big?

A. Network will converge

B. Network will not converge

C. Can't Say

Q32. The network shown in Figure 1 is trained to recognize the characters H and T as shown below:

What would be the output of the network?

D. Could be A or B depending on the weights of the neural network

Q33. Suppose a convolutional neural network is trained on the ImageNet dataset (an object recognition dataset). This trained model is then given a completely white image as input. The output probabilities for this input would be equal for all classes. True or False?

A. True

B. False

Q34. When a pooling layer is added to a convolutional neural network, translation invariance is preserved. True or False?

A. True

B. False

Q35. Which gradient descent technique is more advantageous when the data is too big to handle in RAM simultaneously?

A. Full Batch Gradient Descent

B. Stochastic Gradient Descent
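The advantage of stochastic (mini-batch) gradient descent here is that only one batch needs to be in memory at a time. A minimal sketch of the batching pattern (names and shapes are illustrative):

```python
import numpy as np

def minibatches(X, y, batch_size, rng):
    """Yield shuffled mini-batches; only one batch is processed at a time."""
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 3)), rng.normal(size=1000)
for xb, yb in minibatches(X, y, batch_size=32, rng=rng):
    pass  # compute the gradient and update weights on this batch only
```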

Q36. The graph represents the gradient flow of a 4-hidden-layer neural network trained using the sigmoid activation function, per epoch of training. The neural network suffers from the vanishing gradient problem.

Which of the following statements is true?

A. Hidden layer 1 corresponds to D, Hidden layer 2 corresponds to C, Hidden layer 3 corresponds to B and Hidden layer 4 corresponds to A

B. Hidden layer 1 corresponds to A, Hidden layer 2 corresponds to B, Hidden layer 3 corresponds to C and Hidden layer 4 corresponds to D
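A quick way to see the vanishing-gradient effect: the sigmoid derivative s(z)(1 - s(z)) is at most 0.25, so, ignoring the weight terms, the backpropagated gradient shrinks by at least a factor of 4 per sigmoid layer. A minimal numeric sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

local_grad = sigmoid(0.0) * (1 - sigmoid(0.0))  # 0.25, the maximum possible
grad = 1.0
for layer in (4, 3, 2, 1):
    grad *= local_grad  # one sigmoid layer crossed going backward
    print(f"gradient reaching hidden layer {layer}: {grad:.6f}")
# 0.25, 0.0625, 0.015625, 0.003906 -> the earliest layer sees the smallest gradient
```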

Q37. For a classification task, instead of random weight initialization in a neural network, we set all the weights to zero. Which of the following statements is true?

A. There will not be any problem and the neural network will train properly

B. The neural network will train but all the neurons will end up recognizing the same thing

C. The neural network will not train as there is no net gradient change

D. None of these

Q38. There is a plateau at the start. This is happening because the neural network gets stuck at a local minimum before going on to the global minimum.

To avoid this, which of the following strategies should work?

A. Increase the number of parameters, as the network would not get stuck at a local minimum

B. Decrease the learning rate by 10 times at the start and then use momentum

C. Jitter the learning rate, i.e. change the learning rate for a few epochs

D. None of these

Q39. For an image recognition problem (recognizing a cat in a photo), which architecture of neural network would be better suited to solve the problem?

A. Multi Layer Perceptron

B. Convolutional Neural Network

C. Recurrent Neural Network

D. Perceptron

Q40. Suppose while training you encounter this issue: the error suddenly increases after a couple of iterations.

You determine that there must be a problem with the data. You plot the data and find that the original data is somewhat skewed, and that may be causing the problem.

What will you do to deal with this challenge?

A. Normalize

B. Apply PCA and then normalize

C. Take the log transform of the data

D. None of these
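A log transform compresses the long right tail of skewed data. A minimal sketch (the lognormal sample is illustrative; log1p is used because it is defined at zero):

```python
import numpy as np

def skewness(v):
    """Third standardized moment: > 0 means a long right tail."""
    return float(((v - v.mean()) ** 3).mean() / v.std() ** 3)

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # right-skewed sample

print(skewness(skewed))            # strongly positive
print(skewness(np.log1p(skewed)))  # much closer to 0 after the log transform
```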

Q41. Which of the following is a decision boundary of a neural network?

A) B

B) A

C) D

D) C

E) All of these

Q42. In the graph below, we observe that the error has many "ups and downs".

Should we be worried?

A. Yes, because this means there is a problem with the learning rate of the neural network.

B. No, as long as there is a cumulative decrease in both training and validation error, we don't need to worry.

Q43. What are the factors in selecting the depth of a neural network?

  1. Type of neural network (e.g. MLP, CNN, etc.)
  2. Input data
  3. Computation power, i.e. hardware and software capabilities
  4. Learning rate
  5. The output function to map

A. 1, 2, 4, 5

B. 2, 3, 4, 5

C. 1, 3, 4, 5

D. All of these

Q44. Consider this scenario: the problem you are trying to solve has a small amount of data. Fortunately, you have a pre-trained neural network that was trained on a similar problem. Which of the following methodologies would you choose to make use of this pre-trained network?

A. Re-train the model for the new dataset

B. Assess on every layer how the model performs and only select a few of them

C. Fine-tune the last couple of layers only

D. Freeze all the layers except the last, re-train the last layer
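A minimal fine-tuning sketch in tf.keras; the choice of MobileNetV2, the input shape, and the 10-class head are illustrative, and downloading the ImageNet weights requires network access:

```python
import tensorflow as tf

# A pre-trained feature extractor: ImageNet weights, top layer removed
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg",
    weights="imagenet")
base.trainable = False  # freeze all pre-trained layers

# New task-specific head: the only part that gets re-trained
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_small, y_small, epochs=5)  # train on the small new dataset
```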

Q45. Increasing the size of a convolutional kernel would necessarily increase the performance of a convolutional network. True or False?

A. True

B. False

End Notes

I hope you enjoyed taking the test and found the solutions helpful. The test focused on conceptual knowledge of Deep Learning.

We tried to clear all your doubts through this article, but if we have missed out on something, then let me know in the comments below. If you have any suggestions or improvements you think we should make in the next skill test, let us know by dropping your feedback in the comments section.

Learn, compete, hack and get hired!

Source: https://www.analyticsvidhya.com/blog/2017/01/must-know-questions-deep-learning/
