ExoMiner: NASA’s Deep Neural Network of 2021

Sagar Sikchi
9 min read · Jan 12, 2022


(Image Source: ExoMiner, a new machine learning method that helped discover 301 exoplanets)

Have you heard that NASA (the National Aeronautics and Space Administration) recently discovered 301 previously unknown exoplanets with the help of a deep neural network called ExoMiner? Probably not. Many people have never even heard the terms exoplanet and ExoMiner. No need to worry, though, because this article will clear up all your doubts about exoplanets and ExoMiner. So, let's jump into the action.

It is estimated that, on average, there is at least one planet for every star in the galaxy. According to NASA Science, a galaxy is a collection of gas, dust, stars, and their solar systems, including planets, all held together by gravity. That means there are billions of planets in our galaxy (the Milky Way) alone that lie outside the Sun's solar system. Planets that orbit stars other than our Sun are known as exoplanets.

So, what is ExoMiner, and what is NASA's mission to find exoplanets?

NASA searched for exoplanets with its Kepler spacecraft, which observed hundreds of thousands of stars to identify how many exoplanets might be hiding around them. The Kepler spacecraft looked for temporary dips in the brightness of a star, which could be caused by an exoplanet passing in front of it. NASA ended the mission in 2018, but it left behind a huge amount of data for scientists to sift through in search of new, unknown exoplanets. That is a very tedious task, and that's why ExoMiner comes into the picture.

(Image Source: New Deep Learning Method Adds 301 Planets to Kepler’s Total Count)

Deep neural networks (DNNs) are machine learning methods that automatically learn a task from the data provided to them. ExoMiner is a new deep neural network that leverages NASA's Pleiades supercomputer, and it can distinguish real exoplanets from different types of imposters, or "false positives" (signals that look like exoplanets but are not). ExoMiner's design is inspired by the various tests and properties experts use to confirm new exoplanets, and it learns from previously confirmed exoplanets as well as false-positive cases.

“Unlike other exoplanet-detecting machine learning programs, ExoMiner isn’t a black box — there is no mystery as to why it decides something is a planet or not. We can easily explain which features in the data lead ExoMiner to reject or confirm a planet. These 301 discoveries help us better understand planets and solar systems beyond our own, and what makes ours so unique,” said Jon Jenkins, an exoplanet scientist at NASA’s Ames Research Center in California’s Silicon Valley.

The major difference between a confirmed exoplanet and a validated exoplanet is this: a planet is "confirmed" when different observation techniques reveal features that can only be explained by a planet. A planet is "validated" using statistics, that is, by estimating how likely or unlikely it is to be a planet based on the data.

“When ExoMiner says something is a planet, you can be sure it’s a planet. ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it’s meant to emulate because of the biases that come with human labeling,” said Hamed Valizadegan, ExoMiner project lead, and machine learning manager with the Universities Space Research Association at Ames.

Coming back to the 301 newly discovered planets: they are machine-validated planets. They had already been detected by the Kepler Science Operations Center pipeline and promoted to planet-candidate status, but it was ExoMiner that validated them as planets.

Now that you know what ExoMiner is, let's look at its architecture.

The Methodology Behind the ExoMiner Model

Machine Classification

ExoMiner is a supervised learning model. Machine classification is the task of predicting a class label using a model trained on a pre-labeled training set. A good model should learn effectively from the training data and also perform well on unseen data.

Expected prediction error = Bias² + Variance + Irreducible error

The first term on the right-hand side (RHS) is the (squared) bias. It measures the difference between the average prediction of models learned from different training sets and the true value. High bias is bad for a learner because it learns too little from the training set and performs poorly on both the training and test sets; this is called underfitting. The second term on the RHS of the equation above is the variance. High variance means the model is overtrained on the particular training set it was given, so it performs well on the training set but badly on the test set; this is called overfitting.

A good learner needs to balance bias and variance. It must capture the true relationship between the inputs and the output variable from the training set without memorizing its noise. In other words, a well-designed supervised learning algorithm manages the trade-off between these two sources of error.
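To make the bias-variance trade-off concrete, here is a small, self-contained Python sketch (not from the ExoMiner paper; it fits polynomials of increasing degree to synthetic data with scikit-learn and compares training and test error):

```python
# Toy illustration of the bias-variance trade-off (not ExoMiner code):
# a degree-1 polynomial underfits (high bias), a degree-15 polynomial overfits
# (high variance), and a moderate degree balances the two.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 60)        # noisy sine wave
x_train, y_train = x[:40].reshape(-1, 1), y[:40]
x_test, y_test = x[40:].reshape(-1, 1), y[40:]

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```

On a run like this, the degree-1 fit typically shows high error on both sets (underfitting), the degree-15 fit shows a much lower training error than test error (overfitting), and the middle degree balances the two.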

Fig. 1. Main elements of the ExoMiner neural network

A neural network consists of many neurons: non-linear processing units. Each neuron calculates a weighted sum of all its inputs and passes it through a nonlinear function, called an activation function, which generates the neuron's output.
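As a minimal sketch (purely illustrative, not ExoMiner code), a single neuron can be written in a few lines of NumPy; here the activation is a PReLU-style function, the activation used in ExoMiner's FC layers described below:

```python
# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a nonlinear activation function.
import numpy as np

def prelu(z, alpha=0.1):
    """Parametric ReLU: identity for positive z, a learned slope alpha otherwise."""
    return np.where(z > 0, z, alpha * z)

def neuron(inputs, weights, bias):
    z = np.dot(weights, inputs) + bias   # weighted sum of all inputs
    return prelu(z)                      # nonlinear activation -> neuron output

x = np.array([0.2, -1.0, 0.5])           # example inputs
w = np.array([0.4, 0.3, -0.8])           # example weights
print(neuron(x, w, bias=0.1))
```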

Fully Connected (FC) layers: Each neuron in an FC layer receives the outputs of all neurons in the previous layer. In ExoMiner, the Parametric Rectified Linear Unit (PReLU) is used as the activation function for the neurons in FC layers.

Dropout layers: These are designed to prevent overfitting and act as a form of regularization. To make training noisier and the model more robust, a dropout layer randomly drops a percentage of the neurons in the previous layer.

One-dimensional (1-d) convolutional layers: The neurons in convolutional layers, called filters, operate only on a small portion of the output of the previous layer. They extract features from small, local regions of the input. Convolutional layers are best suited to temporal and spatial data, where neighboring inputs carry locality information.

Pooling layers: These downsample the feature maps generated by the convolutional layers.
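Putting these building blocks together, a single 1-d convolutional branch might look roughly like the following Keras sketch. This is only an assumed layout for illustration; the input length, filter counts, kernel sizes, and dropout rate here are placeholders, not ExoMiner's actual values:

```python
# Rough sketch of one 1-d convolutional branch using the layer types above.
from tensorflow import keras
from tensorflow.keras import layers

branch = keras.Sequential([
    keras.Input(shape=(201, 1)),                         # e.g. a binned light-curve segment
    layers.Conv1D(16, kernel_size=5, padding="same"),    # filters scan small local regions
    layers.Conv1D(16, kernel_size=5, padding="same"),
    layers.MaxPooling1D(pool_size=2),                    # pooling downsamples the feature maps
    layers.Flatten(),
    layers.Dense(32),                                    # fully connected (FC) layer
    layers.PReLU(),                                      # PReLU activation
    layers.Dropout(0.3),                                 # drops 30% of the outputs during training
])
branch.summary()
```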

Explainability remains a challenge when adapting and scaling DNN models across domains and industries. It is important to be able to explain the model's inner workings and why it reaches the decisions it does.

The scientists used domain knowledge both to preprocess and build representative data to feed the DNN model and to decide its general architecture. Components of the Data Validation (DV) report were used as inputs to the DNN model, including full-orbit and transit-view flux data, stellar parameters, and other essential quantities. Scalar values associated with each diagnostic test were also provided so the model can judge whether the test is informative. Some of these scalar values are:

i) Secondary event scalar features

ii) Centroid motion scalar features


Proposed DNN Architecture of ExoMiner

To understand a specific DNN model, one needs to understand the following two sets of parameters:

  1. General architecture-related parameters: how the input is fed to the model, how many blocks and layers the model has, and the types of layers and filters used.
  2. The particular weights on the connections between neurons, which are learned from data during training.

The parameters in the first set are decided by domain knowledge, by hyper-parameter optimization tools, or by both together. The ExoMiner DNN uses domain knowledge to decide what input data to feed the network and what its general architecture should look like. The diagrams below simplify and explain the DNN architecture.
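Before turning to those diagrams, here is a purely illustrative sketch of the distinction between the two parameter sets (the function name, configuration values, and layer sizes are hypothetical): the architecture-level choices are searched over, while the weights between neurons are whatever training learns for each candidate.

```python
# Set 1 (architecture): chosen by domain knowledge and/or hyper-parameter search.
# Set 2 (weights between neurons): learned from data when the model is trained.
import itertools
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_blocks, filters):
    """Build a tiny 1-d CNN for one architecture configuration (hypothetical sizes)."""
    model = keras.Sequential([keras.Input(shape=(201, 1))])
    for _ in range(num_blocks):
        model.add(layers.Conv1D(filters, 5, padding="same", activation="relu"))
        model.add(layers.MaxPooling1D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# A simple grid search over set-1 parameters; each candidate's set-2 weights
# would then be learned by model.fit(...) and scored on a validation set.
for num_blocks, filters in itertools.product([1, 2, 3], [8, 16]):
    model = build_model(num_blocks, filters)
    print(f"blocks={num_blocks}, filters={filters}, weights to learn: {model.count_params()}")
    # model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)
```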

Fig. 2. ExoMiner Architecture
Fig. 3. a) Each convolutional branch, b) convolutional blocks, c) FC blocks

Figure 2 shows how, and which, data are fed to the deep neural network. The input time series (full-orbit flux, transit-view flux, full-orbit centroid, transit-view centroid, transit-view odd & even flux, and transit-view secondary eclipse) are each fed to a separate convolutional branch (figure 3a). Each branch is formed of N blocks, and each block in turn has M convolutional layers (figure 3b) plus a max pooling layer at the end. Some scalar values from the diagnostic tests, namely transit depth, centroid scalars, and secondary scalars, are also included as inputs. For example, transit depth is crucial for understanding the size of the potential planet, because that information is lost when the flux data are normalized.

These scalars and the extracted features are then combined by FC layers (figure 3c) to produce the final sets of features. Those final features are joined with other scalar values (stellar parameters and DV diagnostic scalars) and fed to an FC block (figure 2), which extracts useful, high-level information from all these diagnostic-test-specific features. The final stage feeds the result of these operations to a logistic sigmoid function, which produces an output between zero and one indicating whether the input is a planet candidate.
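The overall wiring can be sketched with the Keras functional API as follows. This is only a simplified, assumed reconstruction of the description above; the branch names, series lengths, block counts, and layer sizes are placeholders, not the values from the ExoMiner paper.

```python
# Simplified, assumed sketch of the multi-branch wiring described above.
from tensorflow import keras
from tensorflow.keras import layers

def conv_branch(name, length, n_scalars=0):
    """One branch: N conv blocks (M conv layers + max pooling each), then an FC
    layer that combines the extracted features with branch-specific scalars."""
    series = keras.Input(shape=(length, 1), name=f"{name}_series")
    x = series
    for _ in range(2):                                       # N = 2 blocks (placeholder)
        for _ in range(2):                                   # M = 2 conv layers per block
            x = layers.Conv1D(16, 5, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(2)(x)                        # max pooling ends each block
    x = layers.Flatten()(x)
    inputs = [series]
    if n_scalars:                                            # e.g. transit depth, centroid scalars
        s = keras.Input(shape=(n_scalars,), name=f"{name}_scalars")
        inputs.append(s)
        x = layers.Concatenate()([x, s])
    x = layers.PReLU()(layers.Dense(32)(x))                  # branch-level FC layer
    return inputs, x

branches = [
    conv_branch("full_orbit_flux", 301),
    conv_branch("transit_view_flux", 31, n_scalars=1),       # + transit depth
    conv_branch("full_orbit_centroid", 301),
    conv_branch("transit_view_centroid", 31, n_scalars=2),   # + centroid scalars
    conv_branch("odd_even_flux", 31),
    conv_branch("secondary_eclipse", 31, n_scalars=2),       # + secondary scalars
]
stellar = keras.Input(shape=(6,), name="stellar_params")
dv_scalars = keras.Input(shape=(4,), name="dv_diagnostic_scalars")

all_inputs = [i for ins, _ in branches for i in ins] + [stellar, dv_scalars]
merged = layers.Concatenate()([feat for _, feat in branches] + [stellar, dv_scalars])
x = layers.PReLU()(layers.Dense(64)(merged))                 # final FC block
x = layers.Dropout(0.3)(x)
planet_score = layers.Dense(1, activation="sigmoid")(x)      # output between 0 and 1

model = keras.Model(inputs=all_inputs, outputs=planet_score)
```

Calling model.summary() on this sketch shows the separate time-series inputs, the scalar inputs, and a single sigmoid output, mirroring the flow described above.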

Performance Evaluation

The researchers studied the performance of different classifiers. The experiments show that ExoMiner has lower variability than the others, meaning it is more stable across varying data sets.

Fig. 4. Precision vs Recall

The precision-recall curve above shows that ExoMiner surpasses all the baseline models used for comparison. ExoMiner maintains high recall at high precision values, which is exactly what matters when validating new exoplanets. At a fixed precision of 0.99, ExoMiner retrieves 93.6% of all exoplanets in the Kepler Q1-Q17 DR25 test set.
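As an illustration of how such a number can be computed (on made-up labels and scores, not the actual Kepler Q1-Q17 DR25 results), scikit-learn's precision_recall_curve makes it easy to read off the recall achieved at a chosen precision level:

```python
# Illustrative only: compute a precision-recall curve and the recall achieved
# at a fixed precision of 0.99, using synthetic labels and scores.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)                              # hypothetical 0/1 labels
scores = np.clip(y_true * 0.7 + rng.normal(0.2, 0.2, 1000), 0, 1)   # hypothetical model scores

precision, recall, thresholds = precision_recall_curve(y_true, scores)
mask = precision[:-1] >= 0.99          # precision[:-1] aligns with the threshold array
if mask.any():
    best_recall = recall[:-1][mask].max()
    print(f"recall at precision >= 0.99: {best_recall:.3f}")
else:
    print("no threshold reaches precision 0.99 on this toy data")
```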

Future Directions for ExoMiner

After this successful work, NASA says there are a number of future directions for improving ExoMiner's performance. Planned improvements to its architecture and input features include:

  1. adding the number of transits and the uncertainty in the depth of the odd and even views, so that the model's use of the odd & even branch becomes more effective;
  2. adding the Kepler magnitude, so that the model can recognize saturated stars, for which the centroid test is invalid; and
  3. performing a full explainability study, using more advanced explainability techniques and designing a framework that gives experts insight into ExoMiner's labels and scores.

Conclusion

In this article, we have learned about exoplanets; about ExoMiner, the highly accurate deep neural network used to confirm and validate new exoplanets; about its architecture; and about NASA's discovery of 301 previously unknown exoplanets. Hope you have liked this article!

References

Check out NASA's official paper on ExoMiner here.

Co-Authors: Pradunya Maladhari, Talib Hussain, and Gitanjali Suryawanshi
