Artificial Intelligence Engineering (AI Engineer)

Get a strong start in the world of technology with the artificial intelligence course! Gain knowledge of AI fundamentals, machine learning, and software testing, plus hands-on experience with real projects!

About the Course

  • Understanding the Problem & Inspecting Raw Data
    • 1 Framing the Business / Research Question
    • 2 Clarifying objectives
    • 3 Defining the target variable
    • 4 Success metrics
    • 5 Importing Data Sources
    • 6 Flat files (CSV, Excel)
    • 7 SQL queries
    • 8 APIs
    • 9 First-Look Inspection in Pandas (code sketch below)
    • 10 head(), info(), describe(), data types, memory usage
    • 11 Data Structure Concepts
    • 12 Rectangular vs. nested data
    • 13 Tidy-data principles
    • 14 Indexes
    • 15 Quick Data Quality Scan
    • 16 Missing-value counts
    • 17 Duplicated rows
    • 18 Identifying impossible values
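
As a taste of this first module, here is a minimal first-look sketch in pandas; the file name customers.csv and the age column are hypothetical placeholders for whatever dataset the cohort works with.

```python
import pandas as pd

# Load a flat file; "customers.csv" is a hypothetical placeholder
df = pd.read_csv("customers.csv")

print(df.head())                   # first rows
df.info(memory_usage="deep")       # dtypes and true memory usage
print(df.describe(include="all"))  # summary statistics

# Quick data-quality scan
print(df.isna().sum())             # missing-value counts per column
print(df.duplicated().sum())       # fully duplicated rows
# Impossible values, e.g. a negative age (assumes an 'age' column exists)
print(df.query("age < 0"))
```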
  • Project Delivery and Interview
    • 1 Define a clear business or research question
    • 2 Acquire and clean the data, then explore it to understand its patterns
    • 3 Engineer features and select appropriate models from those studied (e.g. linear models, trees, boosting, SVMs)
    • 4 Train, tune, and evaluate the models using best practices (cross-validation, learning curves, proper metrics)
    • 5 Package the workflow into a reproducible pipeline and deliver a concise report summarizing the approach, findings, and lessons learned
  • Handling Missing Data & Outliers (code sketch below)
    • 1 Diagnosing Missingness
    • 2 MCAR, MAR, MNAR
    • 3 Visual tools (missingno, heatmaps)
    • 4 Imputation Strategies
    • 5 Mean / median
    • 6 Constant
    • 7 Flag columns
    • 8 Outlier Detection
    • 9 Z-score
    • 10 IQR rule
    • 11 Tukey fences
    • 12 Domain-based limits
    • 13 Robust Rescaling
    • 14 Winsorization
    • 15 log / Box-Cox transforms to tame skew
    • 16 Practical Guidelines
    • 17 When to drop vs. impute
    • 18 Documenting every assumption
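
A compact sketch of the imputation and outlier ideas above, on a tiny made-up income column; the 1.5 multiplier is the usual Tukey-fence default.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({"income": [42_000, 58_000, np.nan, 61_000, 1_200_000]})

# Flag column records where values were missing, then median-impute
df["income_missing"] = df["income"].isna().astype(int)
df["income"] = SimpleImputer(strategy="median").fit_transform(df[["income"]]).ravel()

# IQR rule (Tukey fences): flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = df["income"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = (df["income"] < q1 - 1.5 * iqr) | (df["income"] > q3 + 1.5 * iqr)
print("outliers flagged:", int(outliers.sum()))

# Winsorize: clip extremes to the fences instead of dropping the rows
df["income_w"] = df["income"].clip(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
```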
  • Exploring Data Characteristics (code sketch below)
    • 1 Descriptive Statistics Deep-Dive
    • 2 Mean, median, mode, trimmed mean
    • 3 MAD (median absolute deviation)
    • 4 CV (coefficient of variation)
    • 5 Distribution Analysis
    • 6 Histograms, KDE, QQ-plots
    • 7 Detecting skew & kurtosis
    • 8 Visual Exploration of Categorical Data
    • 9 Frequency tables
    • 10 Bar charts
    • 11 Pareto plots
    • 12 Relationships in Numeric Data
    • 13 Scatter-matrix
    • 14 Correlation heatmap
    • 15 Pair-plot diagnostics
    • 16 Initial Feature Insights
    • 17 Signals, redundancies, and surprises worth engineering later
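
The exploration steps above, sketched on seaborn's bundled penguins dataset (loading it requires an internet connection; any mixed-type table works the same way).

```python
import matplotlib.pyplot as plt
import seaborn as sns

df = sns.load_dataset("penguins")  # example dataset shipped with seaborn

# Distribution analysis: histogram with a KDE overlay
sns.histplot(df["body_mass_g"].dropna(), kde=True)
plt.show()

# Categorical frequencies
print(df["species"].value_counts())

# Relationships in numeric data: correlation heatmap and pair plot
sns.heatmap(df.select_dtypes("number").corr(), annot=True, cmap="coolwarm")
plt.show()
sns.pairplot(df, hue="species")
plt.show()
```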
  • Data Transformation & Feature Preparation (code sketch below)
    • 1 Encoding Categorical Variables
    • 2 One-hot and Ordinal
    • 3 Target & frequency encoding
    • 4 Pitfalls of high-cardinality features
    • 5 Scaling & Normalization
    • 6 StandardScaler, Min-Max, RobustScaler
    • 7 Choosing per algorithm
    • 8 Feature Generation Basics
    • 9 Date-time parts, aggregation, ratios, interaction terms
    • 10 Pipeline Construction in scikit-learn
    • 11 ColumnTransformer
    • 12 Custom transformers
    • 13 Reproducible workflows
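
A minimal ColumnTransformer pipeline of the kind covered above; the column lists are hypothetical placeholders to adapt to the dataset at hand.

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical column lists
num_cols = ["age", "income"]
cat_cols = ["city", "segment"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), num_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), cat_cols),
])
# preprocess.fit_transform(df) then yields a model-ready matrix
```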
  • (Lab). EDA & Cleaning
    • 1 Load a real-world dataset with mixed types
    • 2 Perform systematic missing-value treatment and outlier fixes
    • 3 Produce distribution and relationship visualizations
    • 4 Build a preprocessing pipeline that outputs a model-ready table
    • 5 Present a short notebook summarizing insights and cleaned data quality metrics
  • Machine Learning
    • 1 Linear Models I - Simple & Multiple Regression (code sketch below)
    • 2 Regression Fundamentals
    • 3 Hypothesis & ordinary least squares (OLS)
    • 4 Cost computation: MSE & Normal equation
    • 5 Multiple Regression
    • 6 Vectorized implementation
    • 7 Impact of feature scaling on solvers
    • 8 Model Evaluation
    • 9 Metrics: MSE, RMSE, R²
    • 10 Cross-validation overview
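
A short regression sketch on synthetic data, illustrating OLS fitting, the MSE/RMSE/R² metrics, and a cross-validation pass.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)  # OLS fit
pred = model.predict(X_te)
mse = mean_squared_error(y_te, pred)
print("MSE:", mse, "RMSE:", np.sqrt(mse), "R2:", r2_score(y_te, pred))

# 5-fold cross-validation on the full data
print(cross_val_score(model, X, y, cv=5, scoring="r2"))
```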
  • Linear Models II - Polynomial & Interaction (code sketch below)
    • 1 Polynomial Regression
    • 2 Feature expansion with PolynomialFeatures
    • 3 Selecting polynomial degree
    • 4 Effect on model capacity
    • 5 Bias–Variance Trade-Off
    • 6 Underfitting vs. overfitting symptoms
    • 7 Impact of model complexity
    • 8 Learning Curves
    • 9 Plot training vs. validation error
    • 10 Diagnosing capacity issues
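
Polynomial feature expansion and learning curves in a few lines, on a synthetic cubic signal.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = 0.5 * X[:, 0] ** 3 - X[:, 0] + rng.normal(0, 2, 200)

# Degree-3 polynomial regression: expand features, then fit OLS
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())

# Training vs. validation score as the training set grows
sizes, train_scores, val_scores = learning_curve(model, X, y, cv=5)
print(sizes)
print(train_scores.mean(axis=1))  # high train, low val => overfitting
print(val_scores.mean(axis=1))    # both low => underfitting
```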
  • Regularized Regression — Ridge, Lasso, Elastic Net (code sketch below)
    • 1 Why Regularize?
    • 2 Over-fitting symptom review
    • 3 Shrinkage concept & coefficient paths
    • 4 Ridge Regression (L2)
    • 5 Cost function with the λ‖w‖₂² penalty
    • 6 Effect on correlated features
    • 7 Closed-form vs. SGD implementation
    • 8 Lasso Regression (L1)
    • 9 Sparse solutions & feature selection
    • 10 Geometry: diamond constraint region
    • 11 When to prefer over Ridge
    • 12 Elastic Net
    • 13 Mixing L1 & L2 penalties
    • 14 Tuning α and l1_ratio with grid-search CV
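
Ridge, Lasso, and Elastic Net side by side on synthetic data, including the α/l1_ratio grid search mentioned above; scaling comes first so the penalty treats all coefficients on the same footing.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

ridge = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, y)
lasso = make_pipeline(StandardScaler(), Lasso(alpha=0.1)).fit(X, y)
# L1 drives some coefficients exactly to zero (built-in feature selection)
print("non-zero lasso coefs:", (lasso[-1].coef_ != 0).sum())

# Elastic Net: tune alpha and l1_ratio by grid-search CV
grid = GridSearchCV(ElasticNet(max_iter=10_000),
                    {"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_)
```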
  • Logistic Regression (code sketch below)
    • 1 Logistic Regression
    • 2 Sigmoid activation & probability estimates
    • 3 Cross-entropy loss & gradient descent
    • 4 Naïve Bayes
    • 5 Bayes’ theorem with feature-independence
    • 6 Gaussian vs. Multinomial variants
    • 7 Model Evaluation
    • 8 ROC curve & AUC
    • 9 Precision, recall & F1-score
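
Logistic regression and Gaussian Naïve Bayes on synthetic data, evaluated with ROC-AUC and a precision/recall/F1 report.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000), GaussianNB()):
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]  # sigmoid / posterior estimates
    print(type(model).__name__, "AUC:", roc_auc_score(y_te, proba))
    print(classification_report(y_te, model.predict(X_te)))
```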
  • Regression & Classification Lab
    • 1 Hands-On Regression
    • 2 Fit linear & polynomial models
    • 3 Experiment with ridge & lasso regularization
    • 4 Hands-On Classification
    • 5 Train logistic & Naïve Bayes classifiers
    • 6 Evaluate with ROC & precision/recall
  • K-Nearest Neighbors
  • Support Vector Machines
    • 1 Slack variables & C parameter
    • 2 Kernel Trick
    • 3 Linear, polynomial & RBF kernels
    • 4 Implicit feature mapping
    • 5 Model Tuning
    • 6 C & γ trade-offs
    • 7 Grid search & scaling requirements
  • SVM Project (code sketch below)
    • 1 Data Prep
    • 2 Load a two-class dataset (e.g. make_moons).
    • 3 Split train/test and apply StandardScaler.
    • 4 Linear SVM
    • 5 Train LinearSVC.
    • 6 Evaluate accuracy, confusion matrix, classification report.
    • 7 Plot decision boundary.
    • 8 Kernel SVM
    • 9 Train SVC(kernel='rbf') and SVC(kernel='poly', degree=3).
    • 10 Evaluate & plot each.
    • 11 Hyperparameter Tuning
    • 12 GridSearchCV over C ([0.1, 1, 10]) and gamma ([0.01, 0.1, 1]).
    • 13 Retrain the best model and report test accuracy.
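
A condensed version of the project recipe above (decision-boundary plots omitted for brevity).

```python
from sklearn.datasets import make_moons
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

# Data prep: two-class dataset, train/test split, scaling inside a pipeline
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Linear vs. kernel SVMs (SVMs require scaled features)
for clf in (LinearSVC(), SVC(kernel="rbf"), SVC(kernel="poly", degree=3)):
    pipe = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)
    print(type(clf).__name__, accuracy_score(y_te, pipe.predict(X_te)))
    print(confusion_matrix(y_te, pipe.predict(X_te)))

# Hyperparameter tuning for the RBF kernel, then test-set accuracy
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    {"svc__C": [0.1, 1, 10], "svc__gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_, accuracy_score(y_te, grid.predict(X_te)))
```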
  • Decision Trees
    • 1 Tree Construction
    • 2 Splitting criteria: Gini vs. entropy
    • 3 Recursive binary splitting
    • 4 Tree Regularization
    • 5 Pre-pruning: max_depth, min_samples
    • 6 Post-pruning techniques
    • 7 Interpretability
    • 8 Feature importance
    • 9 Tree visualization
  • Random Forests (code sketch below)
    • 1 Bagging & Variance Reduction
    • 2 Bootstrap sampling
    • 3 Aggregating predictions
    • 4 Random Subspace Method
    • 5 Random feature selection per split
    • 6 Decorrelated trees
    • 7 Out-of-Bag Evaluation
    • 8 OOB error estimation
    • 9 Feature importance from OOB samples
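
One possible single-tree plus random-forest sketch on scikit-learn's breast-cancer data, showing pre-pruning, text visualization, OOB evaluation, and feature importances.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# A single pre-pruned tree
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10,
                              criterion="gini", random_state=0).fit(X, y)
print(export_text(tree))  # text rendering of the fitted tree

# Random forest with out-of-bag evaluation and decorrelated trees
rf = RandomForestClassifier(n_estimators=300, max_features="sqrt",
                            oob_score=True, random_state=0).fit(X, y)
print("OOB accuracy:", rf.oob_score_)
print("largest feature importance:", rf.feature_importances_.max())
```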
  • Tree Ensemble Methods Lab
    • 1 Decision Tree Practice
    • 2 Train & visualize a single tree
    • 3 Experiment with pruning parameters
    • 4 Random Forest Practice
    • 5 Fit RF model & measure OOB error
    • 6 Compute & plot feature importances
    • 7 Hyperparameter Tuning
    • 8 Grid search over n_estimators & max_features
  • Boosting (code sketch below)
    • 1 Boosting Concepts
    • 2 AdaBoost: weighted resampling & exponential loss
    • 3 Gradient Boosting: additive trees & loss-gradient descent
    • 4 Key Hyperparameters
    • 5 n_estimators, learning_rate & max_depth
    • 6 Shrinkage vs. over-fitting trade-off
    • 7 XGBoost
    • 8 Tree regularization (L1/L2) & pruning
    • 9 Column subsampling & tree learning
    • 10 LightGBM
    • 11 Histogram-based binning & leaf-wise growth
    • 12 GPU & multi-threaded acceleration
    • 13 CatBoost
    • 14 Ordered boosting & target leakage prevention
    • 15 Built-in categorical feature handling
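
A boosting sketch using scikit-learn's GradientBoostingClassifier as a stand-in for the family; the commented lines show the analogous sklearn-style constructors that the XGBoost and LightGBM libraries expose.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Additive trees fit to the loss gradient; shrinkage via learning_rate
gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", gbm.score(X_te, y_te))

# Roughly equivalent entry points in the dedicated libraries:
# xgboost.XGBClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
# lightgbm.LGBMClassifier(n_estimators=200, learning_rate=0.05)
```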
  • Anomaly Detection (code sketch below)
    • 1 Isolation Forest
    • 2 Random partitioning principle
    • 3 Anomaly score via path length
    • 4 Key hyperparams: n_estimators, max_samples
    • 5 One-Class SVM
    • 6 Kernel-based novelty detection
    • 7 ν parameter & decision boundary
    • 8 Scale sensitivity
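
A toy anomaly-detection sketch; the inlier/outlier mixture is fabricated for illustration, and the One-Class SVM is standardized first because of its scale sensitivity.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (300, 2)),    # inliers
               rng.uniform(-6, 6, (10, 2))])  # a few scattered anomalies

# Isolation Forest: anomalies isolate in short random-partition paths
iso = IsolationForest(n_estimators=200, max_samples=256, random_state=0).fit(X)
print("isolation forest flags:", (iso.predict(X) == -1).sum())

# One-Class SVM: nu upper-bounds the fraction treated as outliers
Xs = StandardScaler().fit_transform(X)
oc = OneClassSVM(kernel="rbf", nu=0.05).fit(Xs)
print("one-class SVM flags:", (oc.predict(Xs) == -1).sum())
```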
  • Boosting & Anomaly Detection Lab
    • 1 Gradient Boosting Practice
    • 2 Train & tune XGBoost and LightGBM
    • 3 Compare learning curves
    • 4 Anomaly Detection Practice
    • 5 Fit Isolation Forest & One-Class SVM
    • 6 Evaluate with ROC & precision-recall metrics
  • Clustering — K-Means & Hierarchical (code sketch below)
    • 1 K-Means Clustering
    • 2 Centroid initialization & update steps
    • 3 Choosing k via elbow & silhouette methods
    • 4 Hierarchical Clustering
    • 5 Agglomerative vs. divisive approaches
    • 6 Linkage criteria: single, complete, average
    • 7 Dendrogram Interpretation
    • 8 Cutting trees for cluster assignment
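
Silhouette-guided choice of k and an average-linkage dendrogram, sketched on synthetic blobs (for the elbow method, inspect inertia_ across k instead).

```python
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# Choose k via silhouette scores
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))

# Agglomerative clustering: build and plot the dendrogram
Z = linkage(X, method="average")
dendrogram(Z, truncate_mode="lastp", p=12)  # cut the tree to assign clusters
```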
  • Dimensionality Reduction — PCA (code sketch below)
    • 1 Principal Component Analysis
    • 2 Covariance matrix & eigen decomposition
    • 3 Explained variance ratio
    • 4 Dimensionality Reduction Workflow
    • 5 Centering & scaling data
    • 6 Selecting number of components
    • 7 Applications & Visualization
    • 8 Preprocessing for ML & 2D/3D plotting
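
The PCA workflow above in miniature, on the iris data: center and scale, project, and read off the explained variance ratio.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Center & scale, then project onto the top two principal components
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_scaled)

print("explained variance ratio:", pca.explained_variance_ratio_)
X_2d = pca.transform(X_scaled)  # 2-D coordinates for plotting or downstream ML
```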
  • Recommendation Systems (code sketch below)
    • 1 Introduction
    • 2 What & why they matter
    • 3 Key use cases (e-commerce, streaming, news)
    • 4 Collaborative Filtering
    • 5 Memory-based: user–user & item–item similarity
    • 6 Model-based: matrix factorization (SVD/ALS)
    • 7 Content-Based Filtering
    • 8 Item profiles (metadata) & user profiles
    • 9 Similarity via TF-IDF or embeddings
    • 10 Hybrid Systems
    • 11 Weighted & switching strategies
    • 12 Example: Netflix’s metadata + viewing history
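
A tiny content-based filtering sketch: TF-IDF item profiles plus cosine similarity; the four item descriptions are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy item profiles built from metadata text
items = ["action space opera", "romantic comedy in paris",
         "space documentary", "action thriller heist"]

tfidf = TfidfVectorizer().fit_transform(items)
sim = cosine_similarity(tfidf)  # item-item similarity matrix

# Recommend the items most similar to item 0, excluding itself
ranking = sim[0].argsort()[::-1][1:]
print([items[i] for i in ranking])
```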
  • Neural Networks and Deep Learning (AI)
    • 1 What is Artificial Intelligence?
    • 2 Differences between AI, ML, and Deep Learning
    • 3 Modern AI systems and their capabilities
    • 4 Overview of Neural Networks
    • 5 Biological inspiration and historical development
    • 6 Discussion: "Can machines really think?"

Don't know where to start?

Contact us directly: (+994 10) 234 65 56

  • Address
  • Cəfər Cabbarlı St. 609, Baku / Globus Center
