Introduction
Agriculture is more than just a job for many Indians; it is a way of life, the means by which they support their livelihoods and contribute immensely to India’s economy. Determining the soil type, that is, the proportions of clay, sand, and silt particles it contains, is important for selecting suitable crops and for identifying weed growth.
Deep learning is an emerging technology that is proving helpful in many fields. It has been widely applied in smart agriculture at every scale, from field monitoring and field operations to robotics, prediction of soil, water, and climate conditions, and landscape-level monitoring of land and crop types. We can feed photos of soil to a deep learning model, train it to learn the relevant features, and then use it to classify soil types.
In this blog, we will discuss the importance of soil in agriculture. We will classify soil using machine learning and deep learning models.
Learning Objectives
- You will understand how important soil is in agriculture.
- You will learn how machine learning algorithms can classify soil types.
- You will implement a deep learning model in agriculture to classify soil types.
- You will explore the concept of multi-stacking ensemble learning to increase the accuracy of the predictions.
The Role of Soil in Agriculture
Soil, the foundation of agriculture, is formed from organic matter, minerals, gases, liquids, and other substances derived from plants and animals.
India’s economy relies heavily on agriculture. Soil is essential for crops, but its fertility also encourages the growth of unwanted weeds.
Moisture and temperature are the physical variables that impact the formation of pores and particles in the soil, affecting root growth, water infiltration, and plant emergence speed.
Soil consists mainly of sand and clay particles. Among the particle types commonly found at a site, clay is usually the most abundant; its presence near the surface reflects the nutrients it supplies, while peat and loam are scarce. Clay soils have ample pore space between particles, where water is retained.
Dataset
Kaggle Link
Feature extraction is one of the main steps in building a good deep learning model. It is important to identify the features that the machine learning algorithms will need. We will use the Mahotas library to extract Haralick features, which capture the spatial and texture information of the images.
We will use the skimage library to convert images to grayscale and to extract Histogram of Oriented Gradients (HOG) features, which are useful for object detection. Finally, we will concatenate the features into a single array and use it in the machine learning and deep learning algorithms.
import mahotas as mh
from skimage import color, feature, io
import numpy as np
# Function to extract features from an image
def extract_features(image_path):
    img = io.imread(image_path)
    gray_img = color.rgb2gray(img)  # Converting the image to grayscale
    # Converting the grayscale image to integer type
    gray_img_int = (gray_img * 255).astype(np.uint8)
    # Extracting Haralick features using mahotas
    haralick_features = mh.features.haralick(gray_img_int).mean(axis=0)
    # Extracting Histogram of Oriented Gradients (HOG) features
    hog_features, _ = feature.hog(gray_img, visualize=True)
    # Printing the first few elements of each feature array
    print("Haralick Features:", haralick_features[:5])
    print("HOG Features:", hog_features[:5])
    # Concatenating the features into a single array
    all_features = np.concatenate((haralick_features, hog_features))
    return all_features

image_path = "/kaggle/input/soil-classification-dataset/Soil-Dataset/Yellow Soil/20.jpg"
features = extract_features(image_path)
print("Extracted Features:", features)
Machine Learning Algorithms in Soil Classification
Now, let’s build a machine learning model using the soil images we downloaded from Kaggle.
First, we import all the libraries and then define a function named extract_features to extract features from the images. Each image is loaded and processed, which includes converting it to grayscale, and its features are extracted. After features are extracted for each image, the labels are encoded using LabelEncoder.
import os
import numpy as np
import mahotas as mh
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, classification_report
from skimage import color, feature, io
# Function to extract features from an image
def extract_features(image_path):
    img = io.imread(image_path)
    gray_img = color.rgb2gray(img)  # Converting the image to grayscale
    gray_img_int = (gray_img * 255).astype(np.uint8)
    haralick_features = mh.features.haralick(gray_img_int).mean(axis=0)
    hog_features, _ = feature.hog(gray_img, visualize=True)
    hog_features_flat = hog_features.flatten()  # Flattening the HOG features
    # Ensuring both sets of features have the same length
    hog_features_flat = hog_features_flat[:haralick_features.shape[0]]
    return np.concatenate((haralick_features, hog_features_flat))

data_dir = "/kaggle/input/soil-classification-dataset/Soil-Dataset"
image_paths = []
labels = []
class_indices = {'Black Soil': 0, 'Cinder Soil': 1, 'Laterite Soil': 2,
                 'Peat Soil': 3, 'Yellow Soil': 4}

for soil_class, class_index in class_indices.items():
    class_dir = os.path.join(data_dir, soil_class)
    class_images = [os.path.join(class_dir, image) for image in os.listdir(class_dir)]
    image_paths.extend(class_images)
    labels.extend([class_index] * len(class_images))

# Extracting features from images
X = [extract_features(image_path) for image_path in image_paths]

# Encoding labels
le = LabelEncoder()
y = le.fit_transform(labels)

# Splitting the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initializing and training a Random Forest Classifier
rf_classifier = RandomForestClassifier(n_estimators=100, random_state=42)
rf_classifier.fit(X_train, y_train)

# Making predictions
y_pred_rf = rf_classifier.predict(X_test)

# Evaluating the Random Forest model
accuracy_rf = accuracy_score(y_test, y_pred_rf)
report_rf = classification_report(y_test, y_pred_rf)

print("Random Forest Classifier:")
print("Accuracy:", accuracy_rf)
print("Classification Report:\n", report_rf)
Deep Neural Networks
A deep neural network is built from computation units (neurons) arranged in layers; each neuron accepts inputs and produces an output. Deep learning is used to increase accuracy and make better predictions by learning features directly from the data, whereas traditional machine learning algorithms rely on interpreting the data, with decisions made based on that interpretation.
Also Read: An Introductory Guide to Deep Learning and Neural Networks
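To make the idea of a computation unit concrete, here is a small illustrative NumPy sketch (not part of the soil pipeline) of a single neuron: it takes an input vector, computes a weighted sum plus a bias, and passes the result through a ReLU activation.

import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a ReLU activation
    z = np.dot(inputs, weights) + bias
    return max(z, 0.0)

# Example with three inputs (the values are arbitrary, for illustration only)
print(neuron(np.array([0.5, 0.2, 0.1]), np.array([0.4, -0.3, 0.8]), bias=0.1))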
Now, let’s build the model using the Keras Sequential API. This model has a Conv2D convolution layer, a MaxPooling2D pooling layer, a Flatten layer, and Dense layers.
Finally, the model is compiled with the Adam optimizer and categorical cross-entropy loss.
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
data_dir = "/kaggle/input/soil-classification-dataset/Soil-Dataset"
# Setting up data generators
batch_size = 32
image_size = (224, 224)
# Using image_dataset_from_directory to load and preprocess the images
train_dataset = image_dataset_from_directory(
    data_dir,
    labels="inferred",
    label_mode="categorical",
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=image_size,
    batch_size=batch_size,
)

validation_dataset = image_dataset_from_directory(
    data_dir,
    labels="inferred",
    label_mode="categorical",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=image_size,
    batch_size=batch_size,
)
# Displaying the class indices
print("Class indices:", train_dataset.class_names)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(len(train_dataset.class_names), activation='softmax')
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=['accuracy'])
# Training the model
epochs = 10
history = model.fit(train_dataset, epochs=epochs, validation_data=validation_dataset)
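The history object returned by model.fit stores per-epoch metrics, so an optional check of the final training and validation accuracy can be done as sketched below (the 'accuracy' and 'val_accuracy' keys follow from the metrics set in model.compile):

# Optional: inspecting the training history returned by model.fit
final_train_acc = history.history['accuracy'][-1]
final_val_acc = history.history['val_accuracy'][-1]
print(f"Final training accuracy: {final_train_acc:.4f}")
print(f"Final validation accuracy: {final_val_acc:.4f}")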
import numpy as np
from tensorflow.keras.preprocessing import image
# Function to load and preprocess an image for prediction
def load_and_preprocess_image(img_path):
    img = image.load_img(img_path, target_size=image_size)
    img_array = image.img_to_array(img)
    img_array = np.expand_dims(img_array, axis=0)
    # No rescaling here: the model was trained on the raw pixel values
    # produced by image_dataset_from_directory
    return img_array

image_path = "/kaggle/input/soil-classification-dataset/Soil-Dataset/Peat Soil/13.jpg"
new_image = load_and_preprocess_image(image_path)
# Making predictions
predictions = model.predict(new_image)
predicted_class = np.argmax(predictions[0])
# Getting the class label based on the class indices
class_labels = {0: 'Black Soil', 1: 'Cinder Soil', 2: 'Laterite Soil',
                3: 'Peat Soil', 4: 'Yellow Soil'}
predicted_label = class_labels[predicted_class]
# Displaying the prediction
print("Predicted Class:", predicted_class)
print("Predicted Label:", predicted_label)
As you can see, the predicted class is 0, which is Black Soil. So, our model is classifying the type of soil correctly.
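Because the final layer uses softmax, predictions[0] contains one probability per class; if you want to see how confident the model is, a small optional addition like this prints the probability for each soil type:

# Optional: viewing the softmax probability assigned to each soil class
for idx, prob in enumerate(predictions[0]):
    print(f"{class_labels[idx]}: {prob:.4f}")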
Proposed Multi-stacking Ensemble Learning Model Architectures
The StackingClassifier is initialized with the base_classifiers and a logistic regression meta-classifier (final_estimator), which combines the outputs of the base classifiers to make the final prediction. After training and predicting, the accuracy is calculated.
base_classifiers = [
    ('rf', RandomForestClassifier(n_estimators=100, random_state=42)),
    ('knn', KNeighborsClassifier(n_neighbors=5)),
    ('svm', SVC(kernel="rbf", C=1.0, probability=True)),
    ('nb', GaussianNB())
]

# Initializing the stacking classifier with a logistic regression meta-classifier
stacking_classifier = StackingClassifier(estimators=base_classifiers,
                                         final_estimator=LogisticRegression())

# Training the stacking classifier
stacking_classifier.fit(X_train, y_train)

# Making predictions with the Stacking Classifier
y_pred_stacking = stacking_classifier.predict(X_test)

# Evaluating the Stacking Classifier model
accuracy_stacking = accuracy_score(y_test, y_pred_stacking)
report_stacking = classification_report(y_test, y_pred_stacking)

print("\nStacking Classifier:")
print("Accuracy:", accuracy_stacking)
print("Classification Report:\n", report_stacking)
Conclusion
Soil is a key element in producing a good crop, and knowing which soil type suits a specific crop matters, so classifying soil types is important. Since manually classifying soil is time-consuming, deep learning models make the task much easier. Many machine learning and deep learning models can address this problem statement; choosing the best one depends on the quality and amount of data in the dataset and on the problem at hand. Another way to choose the best algorithm is to evaluate each one, measuring accuracy by how often it classifies the soil correctly. Finally, we implemented a Multi-Stacking ensemble model, which combines multiple models to build the best classifier.
Key Takeaways
- For effective crop selection, one should understand the soil completely.
- Deep learning in agriculture is a powerful tool, from predicting plant disease to soil types and water needs.
- We performed feature extraction to obtain features from the soil images.
- In this blog, we explored machine learning and deep learning models for classifying soil and a Multi-stacked ensemble model for improved accuracy.
Frequently Asked Questions
Q1. Why is soil classification important in agriculture?
A. It is important for suitable crop selection and for identifying weed growth.
Q2. What soil features are considered for classification?
A. Features including sand, clay, silt, peat, and loam are considered.
Q3. How does deep learning differ from traditional machine learning?
A. Deep learning allows the model to make intelligent decisions on its own, while traditional machine learning makes decisions by interpreting the data.
Q4. What is the benefit of the Multi-Stacking ensemble model?
A. The Multi-Stacking ensemble model increases the accuracy of classifying the soil type.