
Naive Bayes Complexity

Naive Bayes is very fast and scales well

1. Linear Complexity

Naive Bayes algorithms have O(N*P) time complexity for training, where N is the data size (number of rows) and P is the number of features. Depending on the value P takes, training can therefore range from linear time O(N) (when P is a small constant), through quadratic time O(N^2) (if P = N), all the way up to cubic time O(N^3) (if P = N^2).

For prediction, Naive Bayes algorithms have O(P) time complexity per sample, i.e. prediction is linear in the number of features.

Naive Bayes is one of the fastest machine learning algorithms to work with. There are several reasons for this. During the training stage, only basic arithmetic operations are used for the learning process: adding (counting) and dividing.
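To make the counting-and-dividing point concrete, here is a minimal sketch of a categorical Naive Bayes trainer (the function and variable names are my own, not from any particular library). The training loop touches each of the N rows and each of its P feature values exactly once, which is where the O(N*P) cost comes from; a production version would also add smoothing, but that does not change the complexity.

```python
from collections import defaultdict

def train_naive_bayes(X, y):
    """Train a minimal categorical Naive Bayes model.

    X: list of N rows, each a list of P categorical feature values.
    y: list of N class labels.
    Training is a single pass over all N*P cells -> O(N*P).
    """
    class_counts = defaultdict(int)      # counts per class
    feature_counts = defaultdict(int)    # counts per (class, feature index, value)

    for row, label in zip(X, y):                     # N iterations
        class_counts[label] += 1
        for j, value in enumerate(row):              # P iterations
            feature_counts[(label, j, value)] += 1   # counting only

    n = len(y)
    priors = {c: count / n for c, count in class_counts.items()}   # dividing
    likelihoods = {
        key: count / class_counts[key[0]]                           # dividing
        for key, count in feature_counts.items()
    }
    return priors, likelihoods
```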

Naive Bayes is even faster during the prediction stage because it stores the prior and conditional probabilities learned during the training phase and reuses those same parameters, without any additional steps, at prediction (inference) time. Only very simple addition and multiplication operations are needed to apply Bayes' theorem at this stage. So, besides scaling well thanks to its linear time complexity (when feature size is low), Naive Bayes also tends to be computationally very fast on most modern computers.
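Continuing the sketch above, prediction only reuses the stored parameters: per sample it makes one pass over the P features for each class, which is why it stays O(P) per row, and working in log space turns the multiplications into additions. The small smoothing constant for unseen feature values is an assumption of this sketch, not something the algorithm prescribes.

```python
import math

def predict_naive_bayes(row, priors, likelihoods, smoothing=1e-9):
    """Predict the class of a single row of P categorical features.

    Uses only the parameters stored at training time; the work per row is
    one pass over P features for each class -> O(P) per prediction.
    """
    best_class, best_score = None, float("-inf")
    for c, prior in priors.items():
        score = math.log(prior)                               # addition in log space
        for j, value in enumerate(row):                       # P iterations
            p = likelihoods.get((c, j, value), smoothing)     # fall back for unseen values
            score += math.log(p)
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```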

Please note: algorithmic complexity is not necessarily a good indicator of actual runtime, since it neglects constant factors. For example, consider two algorithms that are both linear, O(N). The runtime of one might behave like 100*N while the other behaves like 0.5*N, a potential 200X performance difference. Real-life differences are not always so exaggerated, but it is good to be aware of this phenomenon.
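A toy illustration of this point, with made-up workloads: both functions below are O(N), yet one does roughly 200 times more work per element than the other, so their wall-clock times differ by a large constant factor.

```python
import time

def cheap_pass(data):
    # O(N): one addition per element
    total = 0
    for x in data:
        total += x
    return total

def expensive_pass(data):
    # Still O(N), but ~200 arithmetic operations per element
    total = 0
    for x in data:
        for _ in range(200):
            total += x * 0.005
    return total

data = list(range(100_000))
for fn in (cheap_pass, expensive_pass):
    start = time.perf_counter()
    fn(data)
    print(fn.__name__, f"{time.perf_counter() - start:.3f}s")
```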

Additionally, the commonly used Big O notation measures the worst-case scenario, which is not always the situation an algorithm actually runs into.


3. Data Size

Being so performant and scaling so well, Naive Bayes won’t give you any problems when you’re working with large datasets or big data.

Naive Bayes computation is trivial for modern computers, and since it stores the parameters it needs after the training process, it can predict on any amount of data with ease.

You can work on datasets with 10 million or even 100 million rows without performance problems.

This attribute makes Naive Bayes a commonly used machine learning algorithm. When performance is a serious consideration and the data is very large and/or high-dimensional with lots of features, Naive Bayes can be a very strong candidate, as in the streaming sketch below.
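As one possible way to handle datasets too large for memory, scikit-learn's MultinomialNB exposes partial_fit, so the rows can be streamed in chunks and the model's counts updated incrementally. The file name, column name, chunk size, and class labels below are placeholders you would adapt to your own data, and MultinomialNB expects non-negative (count-like) features.

```python
import pandas as pd
from sklearn.naive_bayes import MultinomialNB

# Hypothetical file and column names -- adjust to your own dataset.
CSV_PATH = "big_dataset.csv"
LABEL_COLUMN = "label"
CLASSES = [0, 1]          # all class labels must be declared for partial_fit

model = MultinomialNB()

# Stream the file in chunks so memory use stays bounded even for 10M+ rows.
for chunk in pd.read_csv(CSV_PATH, chunksize=100_000):
    y = chunk[LABEL_COLUMN]
    X = chunk.drop(columns=[LABEL_COLUMN])   # features must be non-negative counts
    model.partial_fit(X, y, classes=CLASSES)

# Prediction reuses the stored counts, so it can be streamed the same way.
```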


4. Feature Size

Feature size won’t affect Naive Bayes significantly compared to most other algorithms. Theoretically, Naive Bayes’ worst-case (Big O) training complexity can go all the way up to quadratic complexity if the feature size equals the row size.

In most cases big data won’t have an equally large feature space, so Naive Bayes complexity will remain linear rather than reaching quadratic. For example, if a dataset has 1,000 features and 1,000,000 samples, the worst-case time complexity is O(1000*N). Since constant factors don’t count in time complexity calculations, this simply equals O(N), just with a somewhat longer runtime.
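A quick way to see this on your own machine is to time training while the feature count grows and the row count stays fixed; with synthetic data like the sketch below, runtimes should grow roughly in proportion to the number of features (exact numbers will vary by hardware).

```python
import time
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
n_rows = 10_000

for n_features in (500, 1_000, 2_000):
    # Synthetic non-negative integer features, suitable for MultinomialNB.
    X = rng.integers(0, 5, size=(n_rows, n_features))
    y = rng.integers(0, 2, size=n_rows)
    start = time.perf_counter()
    MultinomialNB().fit(X, y)
    print(f"{n_features:>5} features: {time.perf_counter() - start:.2f}s")
```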

Summary

In summary, Naive Bayes scales very well and can be used with high-dimensional data as well as big data. Because of its high performance, in addition to good accuracy and probabilistic outputs, Naive Bayes has been deployed in solutions widely adopted by the public, such as the first spam classifiers in the mid ’90s, which evolved into commercial spam-filtering applications in the late ’90s.


Thanks for using our Machine Learning Tutorials. Machine learning is a booming field along with other tech and science subjects, and studying it is a smart self-investment, as its popularity shows no sign of slowing anytime soon.

Instead, the field continues to grow and gain more depth. We wish you luck and persistence in pursuing this lucrative field, whether directly or as a supporting tool for other domains you are interested in.