
Adversarial Validation: Improving Ranking in a Hackathon

News Room · Last updated: February 27, 2023

Introduction

While working on predictive modeling, it is common to observe that the model has good accuracy on the training data and lower accuracy on the test data. This is usual for most machine learning problems, but if the gap between training and test accuracy is large, it means that the model is overfitting the training data.


There can be multiple reasons for overfitting:

  1. The model has learned patterns from random noise in the training data.
  2. The training and test data come from different time periods. For example, imagine that you are building a model to identify credit card fraud: the training data has transactions from 2000 to 2019, while the test data contains transactions from 2020 and 2021.
  3. The training and test data have different feature values. For example, imagine that you are building a model to predict retail sales: the training data contains transactions from European countries, while the test data contains transactions from Asian countries.

If you create the training and test data yourself, you can control and minimize such issues.

However, when you participate in a hackathon, you are usually given two datasets: a training dataset and a test dataset. If the hackathon involves supervised learning, you will also have the labels for the training data but not for the test data.

It may happen that the training and test data come from different time periods or have different feature values. In such situations, if you build a model using the training data and apply it to the test data, you may see an accuracy gap between the two datasets.

One might say that we can use cross-validation to prevent such gaps. However, cross-validation draws its folds from the training data itself, so the issue still persists.

So, we need some other way to identify such mismatches. This is where adversarial validation comes in.

This article was published as a part of the Data Science Blogathon.

Table of Contents

  1. What is Adversarial Validation?
  2. Adversarial Validation in Action
     • Module 1: Identifying if the test and train datasets are similar
     • Module 2: How to create a better validation set when the test and train datasets are different
  3. Conclusion

What is Adversarial Validation?

Adversarial Validation is a smart yet simple method to measure how similar the training and test datasets are. It uses a simple idea: if a binary classifier can differentiate between training and test samples, there is a dissimilarity between the training and the test data.

It involves the following basic operations:

  1. Drop the actual target column from the training dataset.
  2. Create a label column in both datasets (0 for the train data and 1 for the test data, or vice versa).
  3. Combine the training and the test datasets.
  4. Train a binary classifier to see whether it can differentiate between the training and test samples.
  5. Evaluate the AUC ROC score, i.e., the Area Under the Receiver Operating Characteristic curve.

If the AUC ROC score is ~0.5, the classifier cannot tell the two apart, which means the test and training data are similar.
If the AUC ROC score is substantially above 0.5, the classifier can tell them apart, which means the test and training data are not similar.
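
As a compact illustration of these steps, here is a minimal, self-contained sketch on synthetic data (the column names and distributions are made up for illustration). It uses out-of-fold predictions via cross_val_predict, so the classifier is not scored on the very rows it was fit on; this is a slightly stricter variant than the in-sample scoring used later in this article:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# toy "train" and "test" frames drawn from slightly different distributions
rng = np.random.default_rng(0)
train = pd.DataFrame({"f1": rng.normal(0.0, 1.0, 1000), "f2": rng.normal(0.0, 1.0, 1000)})
test  = pd.DataFrame({"f1": rng.normal(0.5, 1.0, 500),  "f2": rng.normal(0.0, 1.0, 500)})

# label train rows 0 and test rows 1, then combine and shuffle
train["is_test"] = 0
test["is_test"] = 1
combined = pd.concat([train, test], ignore_index=True).sample(frac=1, random_state=0)

X = combined.drop(columns="is_test")
y = combined["is_test"]

# out-of-fold probability of each row being a test row, scored with ROC AUC
proba = cross_val_predict(
    RandomForestClassifier(random_state=42), X, y, cv=5, method="predict_proba"
)[:, 1]
print("Adversarial validation AUC:", roc_auc_score(y, proba))
# ~0.5  -> train and test look alike
# >>0.5 -> the classifier can tell them apart, so they differ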

Adversarial Validation in Action

Module 1: Identifying if the test and train datasets are similar

1. Download the dataset and import libraries

We will download the Titanic dataset from here:

https://www.kaggle.com/c/titanic

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

2. Load only numeric features

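A minimal version of this step, mirroring the loading code that is repeated in step 7 below:

train_data = pd.read_csv("train.csv")
test_data  = pd.read_csv("test.csv")

# keep only the numerical features
X_test  = test_data.select_dtypes(include=['number']).copy()
X_train = train_data.select_dtypes(include=['number']).copy()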

3. Drop the target feature and create a label column (0 for the train data and 1 for the test data).

# drop the target column from the training data
X_train = X_train.drop(['Survived'], axis=1)

print(X_train.shape)
print(X_test.shape)

# add the train/test labels
X_train["Adv_Val_label"] = 0
X_test["Adv_Val_label"]  = 1

4. Concatenate and shuffle the data

# make one big dataset
all_data = pd.concat([X_train, X_test], axis=0, ignore_index=True)

# shuffle
all_data = all_data.sample(frac=1)

5. Train a Random Forest Model

X = all_data.drop(['Adv_Val_label'], axis=1).fillna(-1)
y = all_data['Adv_Val_label']

# shallow, class-balanced forest; the same hyper-parameters are reused in step 7
clf = RandomForestClassifier(random_state=42, max_depth=2, class_weight="balanced").fit(X, y)

6. Look at the ROC AUC and investigate

from sklearn.metrics import roc_auc_score
auc_score = roc_auc_score(y, clf.predict_proba(X)[:,1])
print(auc_score)

Output: 1.0

Here, the ROC AUC score is 1.0. This means that the model is able to differentiate between the training and test samples completely.

Let us also look at the feature importance. This will help us understand which features are driving the predictions.

feature_imp_random_forest = pd.DataFrame({
    'Feature':list(X.columns),
    'RF_Score':list(clf.feature_importances_)
})
feature_imp_random_forest = feature_imp_random_forest.sort_values(by='RF_Score',ascending=False)
feature_imp_random_forest
Output: [feature importance table]

Upon looking at the Feature Importance, we see that ~97.5% of the importance is due to the PassengerId column.

Let us remove that column and retrain the model.

7. Remove the column and retrain

train_data = pd.read_csv("train.csv")
test_data  = pd.read_csv("test.csv")

# select only the numerical features
X_test  = test_data.select_dtypes(include=['number']).copy()
X_train = train_data.select_dtypes(include=['number']).copy()

# drop the target column from the training data
X_train = X_train.drop(['Survived','PassengerId'], axis=1)
X_test = X_test.drop(['PassengerId'], axis=1)

# add the train/test labels
X_train["Adv_Val_label"] = 0
X_test["Adv_Val_label"]  = 1

# make one big dataset
all_data = pd.concat([X_train, X_test], axis=0, ignore_index=True)

# shuffle
all_data = all_data.sample(frac=1)

X = all_data.drop(['Adv_Val_label'], axis=1).fillna(-1)
y = all_data['Adv_Val_label']

clf = RandomForestClassifier(random_state=42,max_depth=2,class_weight="balanced").fit(X, y)

auc_score = roc_auc_score(y, clf.predict_proba(X)[:,1])
print(auc_score)

Output: 0.6214792797727406

Now, for the same hyper-parameters, the ROC AUC score is ~0.62.

The score has dropped, meaning it is now harder for the model to distinguish between the training and test datasets. Let us also look at the feature importance. This will help us understand which features are driving the predictions.

feature_imp_random_forest = pd.DataFrame({
    'Feature':list(X.columns),
    'RF_Score':list(clf.feature_importances_)
})
feature_imp_random_forest = feature_imp_random_forest.sort_values(by='RF_Score',ascending=False)
feature_imp_random_forest

Output: [feature importance table]

The most important feature is Fare (~34.4% importance), followed by Age (~27.2% importance), and so on. Unlike the previous run, the importance is no longer dominated by a single ID-like column.

Now let us understand how to handle the case when the train and test data differ.

Module 2: How to create a better validation set when the test and train datasets are different

Here, we will use the same dataset and the model we created in step 7. Since the ROC AUC score is ~0.62, the test and training data are not entirely similar. So, we want to create a validation set from the original training data that is as similar as possible to the test data. Let us call it adversarial_validation_data.

Step 1: Use the model from step 7 to predict the probability of each row being a test sample.

# Take prob and identify most similar instances to test data
X_new = X.copy()
X_new['proba'] = clf.predict_proba(X)[:,1]
X_new['target'] = y

Step 2: Keep only the original training rows, i.e., remove the test rows from this data.

X_new = X_new[X_new['target']==0]

Step 3: Sort the data in descending order of probability and pick the top 20% of samples. This means we are selecting the training samples that are most similar to the test data. Let us call this subset adversarial_validation_data; the remaining data will be adversarial_training_data.

nrows = X_new.shape[0]
adversarial_validation_data = X_new.sort_values(by='proba',ascending=False)[:int(nrows*.2)]
adversarial_training_data = X_new.sort_values(by='proba',ascending=False)[int(nrows*.2):]

Now, we can train a machine learning model using adversarial_training_data and tune it against adversarial_validation_data. The accuracy you get on adversarial_validation_data will be closer to what you would see on the actual test data. A rough sketch of this final step is shown below.
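
As a rough sketch of this final step (hypothetical code, not part of the original article): because the training rows kept their original positional index from the earlier concatenation, the Survived labels can be re-attached by index before fitting and validating. The max_depth value below is arbitrary.

# Hypothetical follow-up: re-attach the Survived labels by index
# (train rows kept their 0..len(train_data)-1 positions from the earlier concat),
# fit on adversarial_training_data, and validate on adversarial_validation_data.
y_survived = train_data["Survived"]

feature_cols = [c for c in adversarial_training_data.columns if c not in ("proba", "target")]

X_adv_train = adversarial_training_data[feature_cols]
y_adv_train = y_survived.loc[adversarial_training_data.index]

X_adv_val = adversarial_validation_data[feature_cols]
y_adv_val = y_survived.loc[adversarial_validation_data.index]

model = RandomForestClassifier(random_state=42, max_depth=4)  # arbitrary hyper-parameters
model.fit(X_adv_train, y_adv_train)

val_auc = roc_auc_score(y_adv_val, model.predict_proba(X_adv_val)[:, 1])
print("ROC AUC on the adversarial validation set:", val_auc)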

Conclusion

Adversarial Validation is a clever and simple method for determining whether our test data and training data are similar. We combine our train and test data, labeling them with a 0 for the training data and a 1 for the test data, mix them up, and then see if we can correctly re-identify them using a binary classifier. In this article:

  1. We saw how to tackle overfitting and improve leaderboard scores in a hackathon using adversarial validation.
  2. We saw how adversarial validation helps identify whether the test and train datasets are similar.
  3. We saw how to create a better validation set when the test and train data differ.

Feel free to connect with me on LinkedIn if you want to discuss this with me.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

