%----------------------------------------------------------------------------------------
% PACKAGES AND THEMES
%----------------------------------------------------------------------------------------
\documentclass[aspectratio=169,xcolor=dvipsnames]{beamer}
\usetheme{Madrid}
\usepackage{hyperref}
\usepackage{graphicx} % Allows including images
\usepackage{booktabs} % Allows the use of \toprule, \midrule and \bottomrule in tables
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage[format=plain,justification=center]{caption}
\usepackage{subcaption}
%----------------------------------------------------------------------------------------
% TITLE PAGE
%----------------------------------------------------------------------------------------
% The title
\title[Learning NG-Sets]{Learning the NG-Sets in Branch-and-Price Strategies for Routing Problems}
\subtitle{}
\author[JP Cantillana] {Juan Pablo Cantillana}
\institute[Business Analytics] % Your institution may be shorthand to save space
{
\begin{center}
\includegraphics[width=6cm]{Uni_Logo_blau.jpg}
\end{center}
% Your institution for the title page
Department of Business Analytics \\
University of Vienna
\vskip 3pt
}
\date{\today} % Date, can be changed to a custom date
\AtBeginSection[]
{
\begin{frame}
\frametitle{Table of Contents}
\tableofcontents[currentsection]
\end{frame}
}
%----------------------------------------------------------------------------------------
% PRESENTATION SLIDES
%----------------------------------------------------------------------------------------
\begin{document}
\begin{frame}
% Print the title page as the first slide
\titlepage
\end{frame}
\begin{frame}{Overview}
% Throughout your presentation, if you choose to use \section{} and \subsection{} commands, these will automatically be printed on this slide as an overview of your presentation
\tableofcontents
\end{frame}
%------------------------------------------------
\section{Research question}
%------------------------------------------------
% \begin{frame}{Research team}
% \end{frame}
%------------------------------------------------
\subsection{Column generation}
\begin{frame}{Column Generation -- Motivation}
\begin{itemize}
\item Sometimes obtaining a solution to a Mixed-Integer Program (MIP) is easy.
\item Often it is not, and reformulations must be considered.
\item The representation theorem allows enumerating the extreme points and rays of the feasible polyhedron.
\item In many problems these points and rays can be obtained easily.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{corner2.png}
\caption{In the edge-based formulation, a route can be viewed as a corner of the shortest-path hypercube, as with the route $\{1,2,3,4,1\}$ on the left. Many routes are vertices of the SP, and when multiple vehicles are allowed these can be aggregated and solutions found as linear combinations of these corners/routes.}
\label{fig:cornerxij}
\end{figure}
\end{frame}
\begin{frame}{Column Generation}
\begin{itemize}
\item Column Generation (CG) exploits this ability.
\item We split the problem in two: the constraints that define the corners (e.g., in routing, the arc-based route constraints) form the Subproblem (SP), and the ones that evaluate how these are combined form the Master Problem (MP).
\item The MP is solved using the simplex method; can we reduce the number of (aggregated) variables?
\item The MP must generate the reduced costs for solving the SP, which in turn produces new variables in a coordinated way (a sketch of the decomposition follows on the next slide).
\end{itemize}
\begin{figure}[htb]
\centering
\includegraphics[width=0.25\linewidth]{cg1.png}
\includegraphics[width=0.25\linewidth]{cg2.png}
\caption{The method is called Column Generation because the MP is solved (using simplex) in such a way that columns are slowly introduced into the $A$ matrix to form the basis, and the reduced costs (in the form of dual variables) give the costs for solving the SP.}
\end{figure}
\end{frame}
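%------------------------------------------------
\begin{frame}{Column Generation -- Sketch of the decomposition}
As a sketch (a generic set-partitioning master for routing; the exact formulation used here may differ), the restricted MP over a subset of routes $\Omega'$ reads
\begin{align*}
\min\ & \sum_{r \in \Omega'} c_r \lambda_r \\
\text{s.t.}\ & \sum_{r \in \Omega'} a_{ir} \lambda_r = 1 \quad \forall i \in V' \qquad (\pi_i) \\
 & \lambda_r \geq 0 \quad \forall r \in \Omega'
\end{align*}
where $a_{ir}$ counts the visits of route $r$ to stop $i$ and $\pi_i$ are the duals. The SP then searches for a route minimizing the reduced cost $\tilde{c}_r = c_r - \sum_{i \in V'} a_{ir}\pi_i$; any route with $\tilde{c}_r < 0$ enters the MP as a new column.
\end{frame}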
%------------------------------------------------
\begin{frame}{Column generation in VRP}
\begin{itemize}
\item VRPs are a family of problems whose difficulty grows quickly with the number of stops, so we use tools such as Column Generation to solve them to optimality.
\item The route-specific constraints that form the Shortest Path can also be handled by a label-setting algorithm.
\end{itemize}
\begin{alertblock}{Relevant}
Different constraint relaxations can be used to produce solutions more cheaply. Many concentrate on the subtour elimination constraints, but this sacrifices elementarity (absence of cycles) in the tours.
\end{alertblock}
\begin{examples}
Methods that explore this relaxation include, e.g., subset-row cuts, $k$-cycle elimination, partial elementarity, and the NG-path relaxation.
\end{examples}
\end{frame}
%------------------------------------------------
\begin{frame}{Column generation scheme}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{cg.png}
\caption{From Prof.\ Tilk's presentations: this scheme shows how Column Generation is used for routing problems.}
\label{fig:cgscheme}
\end{figure}
\end{frame}
%------------------------------------------------
\subsection{NG-Sets}
\begin{frame}{No-memory Label setting}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{memoryless.png}
\caption{We can easily produce routes, but if they contain cycles we call them non-elementary (and infeasible). We can include a memory resource in the labeling for the SP with resource constraints, but that is not very efficient. For this specific route, we should remember only node 1, not node 4.}
\label{fig:memory}
\end{figure}
\end{frame}
\begin{frame}{NG-Sets}
\begin{itemize}
\item The NG-path relaxation is a tool for speeding up the attainment of elementarity in solutions obtained via the SPPRC algorithm. It relies on NG-sets associated with each stop to compute a resource called the memory.
\item Usually an NG-set is defined over a vicinity of a given radius, or as the set of the $k$-nearest neighbors.
\item The memory is a route resource updated each time the label is extended, but it is allowed to forget (a worked example follows on the next slide).
\item Dominance checks on labels can also be performed by keeping the label with the smallest memory when all other resources are equal.
\end{itemize}
\end{frame}
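%------------------------------------------------
\begin{frame}{NG-Sets -- A worked example}
A small worked example with hypothetical NG-sets, using the update rule $\Pi(L') = \left(\Pi(L) \cap \mathcal{N}_j\right) \cup \{j\}$ defined in the annex: suppose a label $L$ has memory $\Pi(L) = \{1,3\}$ and is extended to node $4$ with $\mathcal{N}_4 = \{3,4,5\}$. Then
\[
\Pi(L') = \left(\{1,3\} \cap \{3,4,5\}\right) \cup \{4\} = \{3,4\},
\]
so node $1$ is forgotten and a later extension back to node $1$ is allowed: exactly the controlled non-elementarity the relaxation permits.
\end{frame}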
%------------------------------------------------
\begin{frame}{Learning the NG-Sets}
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{newplot3.png}
\caption{Example of an NG-set as a directed graph for a simple instance.}
\end{figure}
\end{frame}
\begin{frame}{Learning the NG-Sets}
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{newplot2.png}
\caption{Example of an NG-set as a directed graph. Expansion around node 37.}
\end{figure}
\end{frame}
%------------------------------------------------
\begin{frame}{Learning the NG-Sets}
\begin{itemize}
\item The methods to obtain these NG-sets are mostly parameter-reliant, and the question of which should be used to obtain an elementary root solution in a branching scheme is still open.
\item The augmented NG-set, or \textit{AugNG-set}, is a tool to determine the natural size of these NG-sets, but can it be learned?
\item Obtaining the data until the solution in the root node is elementary is not excessively costly.
\end{itemize}
\begin{alertblock}{Research hypothesis}
The NG-relaxation-induced lower bound can be learned from the AugNG-sets obtained in the solution of the root node.
\end{alertblock}
\end{frame}
%------------------------------------------------
\section{Methods}
%------------------------------------------------
\subsection{Data acquisition}
\begin{frame}{Obtaining instances}
\begin{itemize}
\item The instances created should span a range of NG-set sizes and difficulty levels.
\item For comparison purposes, space is limited to the $100\times100$ square.
\item Variable parameters considered include:
\begin{itemize}
\item Degree of centrality of depot
\item Width of time windows
\item Number of clusters of stops
\item Distribution of stops
\item Capacity of vehicles
\end{itemize}
\item These form scenario families from which we can sample. In total, over 20,000 instances are generated (a sampling sketch follows on the next slide).
\end{itemize}
\end{frame}
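%------------------------------------------------
\begin{frame}[fragile]{Obtaining instances -- Sampling sketch}
A minimal Python sketch of sampling one instance from a scenario family; all parameter names and distributions are illustrative, not the actual generator.
\footnotesize
\begin{verbatim}
import random

def sample_instance(n_stops=100, n_clusters=3, tw_width=30.0,
                    depot_centrality=1.0, capacity=200):
    # Illustrative generator: clustered stops in the 100x100 square.
    rng = random.Random(0)
    centers = [(rng.uniform(0, 100), rng.uniform(0, 100))
               for _ in range(n_clusters)]
    stops = []
    for _ in range(n_stops):
        cx, cy = rng.choice(centers)
        x = min(max(rng.gauss(cx, 10), 0), 100)  # clip to the square
        y = min(max(rng.gauss(cy, 10), 0), 100)
        ready = rng.uniform(0, 1000)             # time-window start
        stops.append({"xy": (x, y), "tw": (ready, ready + tw_width),
                      "demand": rng.randint(1, 20)})
    # depot_centrality = 1 puts the depot at the center of the square
    off = 50 * (1 - depot_centrality)
    depot = (50 + rng.uniform(-off, off), 50 + rng.uniform(-off, off))
    return {"depot": depot, "stops": stops, "capacity": capacity}
\end{verbatim}
\end{frame}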
\begin{frame}{Obtaining NG-Sets}
\begin{itemize}
\item We limit the study to the AugNG-set generated in the root relaxation.
\item Each instance is solved at the root node.
\item The augmentation is performed until the solution obtained is elementary (sketched after this slide).
\item The obtained set is then stored and can be used for classification tasks.
\end{itemize}
\begin{columns}[c]
\column{.33\textwidth}
\begin{figure}[h]
\centering
\includegraphics[width=4cm]{aug1.png}
\caption{A non elementary column is generated.}
\end{figure}
\column{.33\textwidth}
\begin{figure}[h]
\centering
\includegraphics[width=4cm]{aug2.png}
\caption{We detect it and add the conflicting node to the NG-sets of the cycle's nodes so it is not forgotten.}
\end{figure}
\column{.33\textwidth}
\begin{figure}[h]
\centering
\includegraphics[width=4cm]{aug3.png}
\caption{A new column is generated. As it's elementary, this provides no new information for augmentation.}
\end{figure}
\end{columns}
\end{frame}
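%------------------------------------------------
\begin{frame}[fragile]{Obtaining NG-Sets -- Augmentation sketch}
A Python sketch of the augmentation loop; \verb|solve_root_relaxation| is a hypothetical stand-in for the actual column-generation solver, NG-sets are plain Python sets, and routes are assumed given without the depot endpoints.
\footnotesize
\begin{verbatim}
def find_cycle(route):
    """Return the segment between two visits of the first repeated
    node, or None if the route is elementary."""
    seen = {}
    for pos, node in enumerate(route):
        if node in seen:
            return route[seen[node]:pos + 1]
        seen[node] = pos
    return None

def augment_ng_sets(instance, ng_sets, solve_root_relaxation):
    while True:
        routes = solve_root_relaxation(instance, ng_sets)
        cycles = [c for c in (find_cycle(r) for r in routes) if c]
        if not cycles:            # root solution elementary: done
            return ng_sets
        for cyc in cycles:
            repeated = cyc[0]     # node starting and closing the cycle
            for node in cyc:      # the whole cycle must remember it
                ng_sets[node].add(repeated)
\end{verbatim}
\end{frame}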
\begin{frame}{Obtaining Attributes}
\begin{itemize}
\item General attributes are included, such as the instance-level number of vehicles and capacity, or stop-level ones such as demand, location, time windows, and service times.
\item Additionally, engineered variables are considered, including:
\begin{itemize}
\item Distance to the stop furthest away from the depot
\item Average distance to the depot
\item Average route length, as $\frac{\text{total demand}}{\text{capacity}}$
\item Euclidean distance from each stop to the depot
\item Remaining time available to visit a given stop
\item Remaining capacity available to visit a given stop
\end{itemize}
\item On top of that, graph-encoding variables of the kNN-induced graph can also be considered (a feature sketch follows on the next slide).
\end{itemize}
\end{frame}
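%------------------------------------------------
\begin{frame}[fragile]{Obtaining Attributes -- Feature sketch}
A minimal sketch of the engineered instance-level attributes listed above; the dictionary layout of an instance is assumed (as in the sampling sketch), not prescribed.
\footnotesize
\begin{verbatim}
import math

def engineered_features(instance):
    depot = instance["depot"]
    stops = instance["stops"]
    # Euclidean distance from each stop to the depot
    dists = [math.dist(s["xy"], depot) for s in stops]
    total_demand = sum(s["demand"] for s in stops)
    return {
        "max_dist_to_depot": max(dists),          # furthest stop
        "avg_dist_to_depot": sum(dists) / len(dists),
        # average route length proxy: total demand / capacity
        "avg_route_length": total_demand / instance["capacity"],
    }, dists
\end{verbatim}
\end{frame}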
%------------------------------------------------
\subsection{Models}
\begin{frame}{Models used}
Models used to perform the learning task are:
\begin{itemize}
\item Random Forest with engineered variables
\item Deep classification head on GNN-encoded variables
\item Homogeneous GNN to encode the NG-set
\item Heterogeneous GNN to encode the NG-set
\end{itemize}
Additional attention layers have been omitted due to limited computing capacity and the simplified models' inability to learn connections.
\end{frame}
%------------------------------------------------
\begin{frame}{Random Forest Classifier Model}
\textbf{Dataset Split:}
\begin{itemize}
\item The dataset is split into training and test sets.
\item 80\% of the data is used for training, and 20\% for testing.
\item Stratification ensures balanced class distribution in train/test sets.
\end{itemize}
\textbf{Random Forest Classifier:}
\begin{itemize}
\item Uses log loss (cross-entropy) as the splitting criterion.
\item Limits the depth of each tree to 100 levels to prevent overfitting.
\item Each split considers at most 4 features, controlling tree diversity (a code sketch follows on the next slide).
\end{itemize}
\end{frame}
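%------------------------------------------------
\begin{frame}[fragile]{Random Forest Classifier -- Code sketch}
A runnable sketch of the setup described above using scikit-learn (the \verb|log_loss| criterion requires a recent version); the data here is a synthetic placeholder, not the actual attribute matrix.
\footnotesize
\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10,
                           random_state=0)   # placeholder data

# 80/20 split, stratified to balance the class distribution
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(
    criterion="log_loss",  # cross-entropy splitting criterion
    max_depth=100,         # cap the depth of each tree
    max_features=4,        # at most 4 features per split
    random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
\end{verbatim}
\end{frame}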
%------------------------------------------------
\begin{frame}{Encoding-Decoding GNN for Edge Prediction}
\textbf{Graph Encoder:}
\begin{itemize}
\item \textbf{Graph Type:} Homogeneous (single node and edge type)
\item \textbf{Convolution Layers:}
\begin{itemize}
\item \texttt{GCNConv1}: Projects input features to hidden space with ReLU activation.
\item \texttt{GCNConv4}: Produces output node embeddings.
\end{itemize}
\item \textbf{Output:} Node embeddings used for edge prediction or passed to a classification model.
\end{itemize}
\textbf{Decoding Process:}
\begin{itemize}
\item Predicts edge scores using dot product of node embeddings.
\item Computes full graph adjacency matrix for all possible node pairs.
\end{itemize}
\textbf{Goal:} Predict edges or provide embeddings for downstream tasks (a code sketch follows on the next slide).
\end{frame}
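%------------------------------------------------
\begin{frame}[fragile]{Encoding-Decoding GNN -- Code sketch}
A sketch of the encoder/decoder in PyTorch Geometric; layer sizes are assumed, and the names mirror the \texttt{GCNConv1}/\texttt{GCNConv4} description above.
\footnotesize
\begin{verbatim}
import torch
from torch_geometric.nn import GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv4 = GCNConv(hid_dim, out_dim)

    def encode(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv4(h, edge_index)      # node embeddings

    def decode(self, z, edge_index):
        # score of each candidate edge (u, v) is <z_u, z_v>
        return (z[edge_index[0]] * z[edge_index[1]]).sum(dim=-1)

    def decode_all(self, z):
        return z @ z.t()   # adjacency scores for all node pairs
\end{verbatim}
\end{frame}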
\begin{frame}{Classification Head for Node/Graph-Level Tasks}
\textbf{Classification Model:}
\begin{itemize}
\item Fully connected feed-forward network for downstream tasks.
\item \textbf{Structure:}
\begin{itemize}
\item 3 hidden layers with ReLU activations.
\item \texttt{Input Size:} Embedding size from the encoder (e.g., 78).
\item \texttt{Output:} Binary classification via sigmoid activation.
\end{itemize}
\end{itemize}
\textbf{Workflow:}
\begin{enumerate}
\item Encode node embeddings using GNN layers.
\item Pass embeddings to classification head for:
\begin{itemize}
\item Node/graph classification tasks.
\item Combining graph-based features with external data.
\end{itemize}
\end{enumerate}
\textbf{Goal:} Perform classification tasks using graph-based features (a code sketch follows on the next slide).
\end{frame}
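%------------------------------------------------
\begin{frame}[fragile]{Classification Head -- Code sketch}
A sketch of the feed-forward head; the hidden width is assumed, while 78 is the example embedding size from the previous slide.
\footnotesize
\begin{verbatim}
import torch

class ClassificationHead(torch.nn.Module):
    def __init__(self, in_dim=78, hid_dim=64):
        super().__init__()
        self.net = torch.nn.Sequential(      # 3 hidden ReLU layers
            torch.nn.Linear(in_dim, hid_dim), torch.nn.ReLU(),
            torch.nn.Linear(hid_dim, hid_dim), torch.nn.ReLU(),
            torch.nn.Linear(hid_dim, hid_dim), torch.nn.ReLU(),
            torch.nn.Linear(hid_dim, 1))

    def forward(self, z):
        # binary classification via sigmoid activation
        return torch.sigmoid(self.net(z)).squeeze(-1)
\end{verbatim}
\end{frame}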
%------------------------------------------------
\begin{frame}{Homogeneous GNN Model for Edge Prediction}
\textbf{Architecture:}
\begin{itemize}
\item \textbf{Graph Type:} Homogeneous (single node and edge type)
\item \textbf{Convolution Layers:}
\begin{itemize}
\item 4 \texttt{GCNConv} layers for feature propagation and aggregation.
\end{itemize}
\item \textbf{Linear Layers:}
\begin{itemize}
\item Two optional linear transformations (\texttt{Linear1}, \texttt{Linear2}) for additional feature refinement.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Homogeneous GNN Model for Edge Prediction II}
\textbf{Encoding Process:}
\begin{enumerate}
\item Apply \texttt{GCNConv} layers sequentially:
\begin{itemize}
\item First 3 layers: Extract hierarchical features with ReLU activations.
\item Final layer: Produces the output embeddings.
\end{itemize}
\item Optional: Dropout for regularization and edge dropout to sparsify training graphs.
\end{enumerate}
\textbf{Decoding Process:}
\begin{itemize}
\item Computes dot products between node embeddings to predict edge scores.
\end{itemize}
\textbf{Full Graph Prediction:}
\begin{itemize}
\item Computes pairwise similarities between all node embeddings for adjacency matrix reconstruction.
\end{itemize}
\textbf{Goal:} Predict edges in a homogeneous graph (a code sketch follows on the next slide).
\end{frame}
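%------------------------------------------------
\begin{frame}[fragile]{Homogeneous GNN -- Code sketch}
A sketch of the four-layer encoder with dropout and edge dropout; sizes and dropout rates are assumed (\verb|dropout_edge| is available in recent PyTorch Geometric versions).
\footnotesize
\begin{verbatim}
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import dropout_edge

class HomoGNN(torch.nn.Module):
    def __init__(self, in_dim, hid, out_dim, p=0.2):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            [GCNConv(in_dim, hid), GCNConv(hid, hid),
             GCNConv(hid, hid), GCNConv(hid, out_dim)])
        self.p = p

    def encode(self, x, edge_index):
        if self.training:  # sparsify the training graph
            edge_index, _ = dropout_edge(edge_index, p=self.p)
        for conv in self.convs[:-1]:
            x = F.dropout(conv(x, edge_index).relu(), p=self.p,
                          training=self.training)
        return self.convs[-1](x, edge_index)

    def decode_all(self, z):
        return z @ z.t()   # pairwise adjacency reconstruction
\end{verbatim}
\end{frame}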
%------------------------------------------------
\begin{frame}{Heterogeneous GNN Model for Edge Prediction}
\textbf{Architecture:}
\begin{itemize}
\item \textbf{Node Types:} \texttt{stops}, \texttt{depot}
\item \textbf{Edge Types:}
\begin{itemize}
\item \texttt{route} (\texttt{stops} $\to$ \texttt{stops})
\item \texttt{remember} (\texttt{stops} $\to$ \texttt{stops})
\item \texttt{departs} (\texttt{depot} $\to$ \texttt{stops})
\item \texttt{return} (\texttt{stops} $\to$ \texttt{depot})
\end{itemize}
\item \textbf{Convolution Layers:} Uses \texttt{GCNConv} and \texttt{SAGEConv} for message passing.
\end{itemize}
\end{frame}
\begin{frame}{Heterogeneous GNN Model for Edge Prediction II}
\textbf{Encoding Process:}
\begin{enumerate}
\item Linear projections for \texttt{stops} and \texttt{depot} inputs.
\item \textbf{Two-stage Convolution:}
\begin{itemize}
\item \textbf{Stage 1:} Combines \texttt{route}, \texttt{remember}, and \texttt{depot} connections using GCN/SAGE layers.
\item \textbf{Stage 2:} Refines embeddings using updated node features.
\end{itemize}
\item Produces final embeddings for \texttt{stops} and \texttt{depot}.
\end{enumerate}
\textbf{Decoding Process:}
\begin{itemize}
\item Predicts sparse, directed \texttt{remember} edges using dot product of \texttt{stops} embeddings.
\end{itemize}
\textbf{Goal:} Predict sparse directed edges between \texttt{stops} (a code sketch follows on the next slide).
\end{frame}
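%------------------------------------------------
\begin{frame}[fragile]{Heterogeneous GNN -- Code sketch}
A sketch of the heterogeneous encoder in PyTorch Geometric; for simplicity \verb|SAGEConv| handles all edge types here (the actual model mixes \verb|GCNConv| and \verb|SAGEConv|), and sizes are assumed.
\footnotesize
\begin{verbatim}
import torch
from torch_geometric.nn import HeteroConv, SAGEConv

def stage(hid):  # one HeteroConv stage over all four edge types
    return HeteroConv({
        ("stops", "route", "stops"): SAGEConv((-1, -1), hid),
        ("stops", "remember", "stops"): SAGEConv((-1, -1), hid),
        ("depot", "departs", "stops"): SAGEConv((-1, -1), hid),
        ("stops", "return", "depot"): SAGEConv((-1, -1), hid),
    }, aggr="sum")

class HeteroEncoder(torch.nn.Module):
    def __init__(self, hid=64):
        super().__init__()
        self.lin = torch.nn.ModuleDict(
            {"stops": torch.nn.LazyLinear(hid),
             "depot": torch.nn.LazyLinear(hid)})
        self.conv1, self.conv2 = stage(hid), stage(hid)

    def forward(self, x_dict, edge_index_dict):
        # linear projections, then two convolution stages
        x_dict = {k: self.lin[k](v).relu() for k, v in x_dict.items()}
        x_dict = {k: v.relu()
                  for k, v in self.conv1(x_dict, edge_index_dict).items()}
        x_dict = self.conv2(x_dict, edge_index_dict)
        z = x_dict["stops"]
        return z @ z.t()   # scores for directed 'remember' edges
\end{verbatim}
\end{frame}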
%------------------------------------------------
\section{Status}
%------------------------------------------------
\subsection{Results}
\begin{frame}{Status}
\begin{itemize}
\item Over 24,000 randomly generated instances analyzed, with about 4,000 considered hard to solve; all of them have 100 customers in the $100\times100$ square.
\item An AugNG-set is available for over 20,000 of them.
\item Preliminary results of a simple random forest allow solving most of the Solomon instances in competitive times. Models 1 and 2 are not easily distinguishable; Model 1 has default hyperparameters.
\item 3 graph neural models are available, Model 1 (Acc $= 0.895$), Model 2 (Acc $= 0.876$), and Model 3 (Acc $= 0.785$), all based on projective graph encoding using Graph Convolutional Networks (GCN).
\item Preliminary results of the graph neural models allow solving most of the Solomon instances using Model 2 in competitive times. Model 1 has the best accuracy, but cannot propose sets for all Solomon instances. Model 3 shows lower accuracy but is still under development.
\end{itemize}
\end{frame}
\begin{frame}{Random forest summary}
\begin{columns}[c]
\column{.5\textwidth}
\begin{figure}[h]
\centering
\includegraphics[width=5cm]{rf_clf1_roc.png}
\caption{Model 1, Acc $= 0.923$}
\end{figure}
\column{.5\textwidth}
\begin{figure}[h]
\centering
\includegraphics[width=5cm]{rf_clf2_roc.png}
\caption{Model 2, Acc $= 0.920$}
\end{figure}
\end{columns}
\end{frame}
\begin{frame}{Graph neural networks summary}
\begin{columns}[c]
\column{.33\textwidth}
\begin{figure}[h]
\centering
\includegraphics[width=4.5cm]{m1.png}
\caption{Model 1, Acc $= 0.895$}
\end{figure}
\column{.33\textwidth}
\begin{figure}[h]
\centering
\includegraphics[width=4.5cm]{m2.png}
\caption{Model 2, Acc $= 0.876$}
\end{figure}
\column{.33\textwidth}
\begin{figure}[h]
\centering
\includegraphics[width=4.5cm]{m3.png}
\caption{Model 3, Acc $= 0.785$}
\end{figure}
\end{columns}
\end{frame}
\begin{frame}{Out-of-distribution. Solomon Instances}
\begin{table}
\begin{tabular}{l l l}
\toprule
\textbf{Model} & \textbf{In-Distr.\ Sensitivity} & \textbf{Out-of-Distr.\ Sensitivity} \\
\midrule
Random Forest & 0.930 & 0.273 \\
Deep Class. + GNN encoding & 0.933 & 0.025 \\
Homogeneous NG encoding & 0.064 & 0.049 \\
Heterogeneous NG encoding & - & - \\
\bottomrule
\end{tabular}
\caption{Comparing the sensitivity of each model on a selection of Solomon instances.}
\end{table}
\end{frame}
\subsection{Discussion}
\begin{frame}{Discussion}
\begin{itemize}
\item Reminder: the Random Forest has access to the engineered variables.
\item The effect of the engineered variables on the classification head of Deep Classification + GNN encoding should be studied. No good results have been found on the encoding head.
\item Sensitivity: how to improve it?
\item Remark: (node-)encoding a complete graph has many disadvantages, and some heuristics must be applied (kNN).
\item Heterogeneity in the nature of the graph has not yielded interesting increases in performance. Perhaps a lack of interesting properties to exploit?
\item How to create a useful attention layer? Some research is available on adding scores to routes; in this research it showed decreased performance (overfitting).
\end{itemize}
\end{frame}
%------------------------------------------------
% \begin{frame}{Theorem}
% \begin{theorem}[Mass--energy equivalence]
% $E = mc^2$
% \end{theorem}
% \end{frame}
% %------------------------------------------------
% \begin{frame}{Figure}
% Uncomment the code on this slide to include your own image from the same directory as the template .TeX file.
% %\begin{figure}
% %\includegraphics[width=0.8\linewidth]{test}
% %\end{figure}
% \end{frame}
% %------------------------------------------------
% \begin{frame}[fragile] % Need to use the fragile option when verbatim is used in the slide
% \frametitle{Citation}
% An example of the \verb|\cite| command to cite within the presentation:\\~
% This statement requires citation \cite{p1}.
% \end{frame}
% %------------------------------------------------
% \section{Planning}
% %------------------------------------------------
% \begin{frame}{Quarter projections}
% \includegraphics[width=0.99\linewidth]{calnov2024.jpg}
% \end{frame}
%------------------------------------------------
\begin{frame}{References}
% Beamer does not support BibTeX so references must be inserted manually as below
\footnotesize{
\begin{thebibliography}{99}
\bibitem[Pecin, 2012]{p1} Pecin, Daniel (2012)
\newblock Branch-Cut-and-Price Algorithms for Vehicle Routing Problems
\newblock \emph{Column Generation 2012 Workshop}, GERAD, Montreal, Canada.
\newblock \url{https://www.gerad.ca/colloques/ColumnGeneration2012/presentations/session6/Pecin.pdf}.
\end{thebibliography}
\begin{thebibliography}{99}
\bibitem[Costa, 2019]{p3} Costa, Luciano and Contardo, Claudio and Desaulniers, Guy (2019)
\newblock Exact Branch-Price-and-Cut Algorithms for Vehicle Routing
\newblock \emph{Transportation Science}
\newblock \url{http://dx.doi.org/10.1287/trsc.2018.0878}.
\end{thebibliography}
\begin{thebibliography}{99}
\bibitem[Kipf and Welling, 2017]{p2} Kipf, Thomas N., and Welling, Max (2017)
\newblock Semi-Supervised Classification with Graph Convolutional Networks
\newblock \emph{arXiv preprint}, arXiv:1609.02907.
\newblock \url{https://arxiv.org/abs/1609.02907}.
\end{thebibliography}
}
\end{frame}
%------------------------------------------------
% \begin{frame}
% \Huge{\centerline{The End}}
% \end{frame}
\begin{frame}
% Print the title page as the first slide
\titlepage
\end{frame}
\section{Annex}
%------------------------------------------------
\begin{frame}{NG-Sets and ways of obtaining them}
The NG-path relaxation is a tool for speeding up the attainment of elementarity in solutions obtained via the SPPRC algorithm. It relies on NG-sets associated with each stop to compute a resource called the memory.
\begin{block}{Ng-Set}
Given a distance threshold $\Delta \geq 0$ and a graph $G=(V,E)$ with $V' = V \setminus \{0\}$ and positions $d_i$ for all $i \in V$, the NG-set of a stop $i \in V'$ is defined as $\mathcal{N}_{i} = \{j \in V': \|d_j - d_i\|_k \leq \Delta\}$.
\end{block}
\begin{block}{Memory of a label}
Let $L$ be a label obtained in a previous iteration of an SPPRC following a path $V(L) = (i_1, \ldots, i_k)$; then the associated memory is $\Pi(L)=\{i_u \in V(L) : i_u \in \bigcap_{s=u}^{k}\mathcal{N}_{i_s}\}$.
\end{block}
A label cannot be extended to a candidate node that belongs to its memory.
\end{frame}
%------------------------------------------------
\begin{frame}{NG-Sets and ways of obtaining them}
Specific adaptations to the SPPRC algorithm that must be defined for this memory resource are the following:
\begin{block}{Updating a label}
Let $L$ be a label obtained in a previous iteration of an SPPRC and extended to node $j$; then the memory of the new label is $\Pi(L')= \left(\Pi(L)\cap \mathcal{N}_{j}\right) \cup \{j\}$.
\end{block}
\begin{block}{Dominance check}
Let $L_1, L_2$ be two labels obtained by the SPPRC. We say that $L_1$ dominates $L_2$ in the sense of memory when $\Pi(L_1) \subseteq \Pi(L_2)$.
\end{block}
\begin{examples}
There are other methods to create the NG-sets besides the $\Delta$-method, such as using $k$-NN, or iteratively reaching the elementary lower bound as in the augmented NG-set (a code sketch of the memory operations follows on the next slide).
\end{examples}
\end{frame}
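%------------------------------------------------
\begin{frame}[fragile]{NG-Sets -- Memory operations in code}
A minimal Python sketch of the two operations above, with NG-sets represented as plain Python sets (the example numbers are hypothetical).
\footnotesize
\begin{verbatim}
def extend_memory(pi, ng_sets, j):
    # Pi(L') = (Pi(L) intersect N_j) union {j}
    return (pi & ng_sets[j]) | {j}

def dominates(pi1, pi2):
    # L1 dominates L2 (memory-wise) when Pi(L1) is a subset of
    # Pi(L2), all other resources being equal
    return pi1 <= pi2

ng_sets = {5: {2, 5, 7}}
print(extend_memory({2, 3}, ng_sets, 5))  # {2, 5}: node 3 forgotten
\end{verbatim}
\end{frame}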
%----------------------------------------------------------------------------------------
\end{document}