IEEE CSITSS 2025, pp. 1-6, IEEE document 11295090

Research report for the hybrid AI and cloud ERP paper

This page restates the published work in a repository-native format: the research problem, architecture, experimental setup, quantitative results, module-level attention analysis, and repository evidence that supports each claim.

Abstract

Research motivation and objective

Motivation

Traditional ERP systems are operationally central but often rigid, difficult to scale cleanly, and weak at producing real-time insight when module dependencies become complex.

The paper argues that AI and cloud computing together provide a better foundation: AI for predictive automation and dependency modeling, cloud for scalability and infrastructure elasticity.

Objective

The proposed framework uses a BiLSTM-Attention architecture to evaluate ERP module relationships and support modular analysis in cloud-hosted enterprise systems.

The reported goals are improved system efficiency, resource optimization, and intelligent decision support; this page restates those goals without altering the core claims of the published work.

Methodology

Dataset, preprocessing, and architecture evidence

This section focuses on what the local paper bundle directly supports: dataset description, preprocessing rules, model architecture, and hyperparameters.

Workflow image showing data collection, preprocessing, AI model training, cloud computing integration, and evaluation.
Published workflow figure extracted from the paper bundle.

Method summary

  • Dataset: ERP Module-Table Dependency Dataset referenced in the paper.
  • Features: module codes, table counts, dependency counts, and complexity score.
  • Preprocessing: missing-value handling, categorical encoding, normalization, and sequence construction.
  • Model: BiLSTM hidden states combined with attention-weighted context for final prediction.
BiLSTM-Attention architecture diagram.
Architecture figure from the paper.
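The attention-weighted context step in the summary above can be sketched in a few lines of NumPy. The function name `attention_context` and the plain dot-product scoring are illustrative assumptions, not the paper's exact formulation (which, given the separate 64-dimensional attention vector, may include an additional learned projection).

```python
import numpy as np

def attention_context(hidden_states, attention_vector):
    """Attention-weighted context over BiLSTM outputs (a minimal sketch).

    hidden_states:    (timesteps, hidden_dim) array of concatenated
                      forward/backward BiLSTM hidden states.
    attention_vector: (hidden_dim,) learned query vector.
    Returns (context, weights); the weights are a softmax distribution
    over timesteps and the context is their weighted sum.
    """
    scores = hidden_states @ attention_vector        # (timesteps,)
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over timesteps
    context = weights @ hidden_states                # (hidden_dim,)
    return context, weights
```

The context vector, not the raw final hidden state, then feeds the prediction layer, which is what lets the model expose per-module attention scores like those tabulated below.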

Hyperparameter configuration

Parameter              Value
Embedding Dimension    128
LSTM Hidden Size       128
Attention Vector Size  64
Dropout Rate           0.3
Learning Rate          0.001
Optimizer              Adam
Epochs                 50
Batch Size             32
Loss Function          Categorical Crossentropy

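For reference, the published configuration can be collected into a single mapping. The `HYPERPARAMS` name and the `validate_hyperparams` helper are illustrative conventions of this page, not code from the paper; the values are copied from the table above.

```python
# Published hyperparameters collected into one mapping.
# Dict name and sanity-check helper are illustrative only.
HYPERPARAMS = {
    "embedding_dim": 128,
    "lstm_hidden_size": 128,
    "attention_vector_size": 64,
    "dropout_rate": 0.3,
    "learning_rate": 0.001,
    "optimizer": "Adam",
    "epochs": 50,
    "batch_size": 32,
    "loss": "categorical_crossentropy",
}

def validate_hyperparams(cfg):
    """Basic range checks before handing the config to a trainer."""
    assert 0.0 <= cfg["dropout_rate"] < 1.0
    assert cfg["learning_rate"] > 0
    assert cfg["epochs"] > 0 and cfg["batch_size"] > 0
    return True
```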
Results

Quantitative findings and figure evidence

Model comparison

The BiLSTM-Attention model is the top performer across accuracy, precision, recall, and F1-score. Random Forest is the strongest traditional (non-sequence) baseline, and GRU is the strongest sequence-model baseline.

The paper emphasizes that the proposed architecture improves not only peak accuracy but also consistency across evaluation metrics.

Training behavior

The published figures show a steady improvement in training and validation accuracy and a simultaneous reduction in both training and validation loss.

Because the raw logs are absent, this repository keeps the original images and separately provides a clearly labeled reconstructed curve series for interactive use.

Training and validation accuracy curve from the paper.
Original training-accuracy figure from the paper source bundle.
Training and validation loss curve from the paper.
Original training-loss figure from the paper source bundle.
Bar chart of ERP module attention scores from the paper.
Attention chart included with the paper bundle.
Cloud ERP microservice integration diagram.
Alternative cloud ERP integration visual supplied in the original figure set.
Tables

Exact numeric tables from the paper

Baseline model performance

Model                    Precision  Recall  F1-score  Accuracy
Logistic Regression      84.0       79.5    81.7      82.7
Random Forest            87.5       83.0    85.2      85.3
Support Vector Machine   80.2       77.0    78.6      79.4
Gated Recurrent Unit     85.0       86.5    85.7      83.2
BiLSTM-Attention         89.5       90.8    90.1      91.2
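As an internal-consistency check, every reported F1-score in this table matches the harmonic mean of its precision and recall to one decimal place. The sketch below recomputes them; the `f1` helper and the table literal are ours, while the numbers are copied verbatim from the table above.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (values in percent)."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F1) from the baseline table above.
baselines = {
    "Logistic Regression":    (84.0, 79.5, 81.7),
    "Random Forest":          (87.5, 83.0, 85.2),
    "Support Vector Machine": (80.2, 77.0, 78.6),
    "Gated Recurrent Unit":   (85.0, 86.5, 85.7),
    "BiLSTM-Attention":       (89.5, 90.8, 90.1),
}

# Recomputed F1 agrees with the reported value for every model.
for name, (p, r, reported) in baselines.items():
    assert round(f1(p, r), 1) == reported, name
```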

ERP module attention weights

Module     Score  Interpretation
Module_1   0.07   Low importance; likely non-critical table
Module_2   0.09   Moderate relevance
Module_3   0.05   Minimal impact
Module_4   0.13   High dependency weight
Module_5   0.17   Most influential; likely a central or shared table
Module_6   0.11   Strong contextual role
Module_7   0.04   Negligible attention; possibly isolated module
Module_8   0.12   Important secondary dependency
Module_9   0.10   Supportive of prediction context
Module_10  0.12   High final-context relevance

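The ten attention scores above sum to exactly 1.00, consistent with a softmax-normalized attention distribution. The short sketch below verifies that property and ranks the modules; the variable names are illustrative, while the scores are copied from the table.

```python
# Module attention scores reproduced from the table above.
attention = {
    "Module_1": 0.07, "Module_2": 0.09, "Module_3": 0.05,
    "Module_4": 0.13, "Module_5": 0.17, "Module_6": 0.11,
    "Module_7": 0.04, "Module_8": 0.12, "Module_9": 0.10,
    "Module_10": 0.12,
}

# Softmax-derived weights should form a probability distribution.
total = sum(attention.values())

# Rank modules from most to least influential.
ranked = sorted(attention, key=attention.get, reverse=True)
```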
Limitations

What the repository cannot verify directly

Missing raw artifacts

  • The original ERP dataset is referenced but not bundled in the repository.
  • Per-epoch logs are not present, so reconstructed interactive training charts are marked as such.
  • No runnable training notebook or deployment code is included in the source bundle.

Cloud deployment scope

The paper explicitly discusses a cloud-native ERP integration pathway, but not a completed real-world production deployment. This repository preserves that boundary instead of overstating operational evidence.

Publication metadata on this site follows the IEEE indexing record. The original LaTeX bundle still contains an outdated first-page header from an earlier template.