PAST ≠ FUTURE
- Ravi Raghu

- Aug 18, 2021
PAST ≠ FUTURE is one of the most important rules regarding the limitations of Big Data in Data Science. Yes, Tony Robbins has said it too.
The meaning of the rule is that the future cannot be predicted by observing the past.
The message of this principle is that any data gathered represents only the past; it cannot represent the future, because there is no data about the future. However, if one intervenes in the present using the data collected (the past), the data of the future can be changed. If data is events, then intervening in events changes the future.
The key takeaway of this rule is "intervention". It tells us that if we intervene in the flow of data, we can change the future. It also reminds us that the only way to change the future is to intervene in the present. If we do not, the data flows on without any change.
The real power of Machine Learning is not predicting, not adapting; it is "intervening". When humans use Machine Learning to "intervene", the algorithm achieves a power that no human can ever have.
On one hand, the human mind is highly capable of adaptation; on the other, it is extremely slow to adapt. If biological adaptation is taken as the gold standard, then a Machine Learning algorithm is totally incapable of "adapting". Conversely, humans are incapable of intervening in the future; they can only "adapt". We seem to be confusing "adapting" and "intervening" today.
Humans have only one way to adapt: they can make changes in the present while keeping in mind that the future is uncertain. This is because humans do not have the power to examine data objectively the way an algorithm does. If we did, we would "intervene", not adapt.
Humans attempt to predict the future by observing emotions about the past. For example, say we hold data showing that every train from Tambaram Suburban Railway station started at 04:30 AM for the last 10 years; a human expects the train to start at the same time for the next 10 years. Expectation is a feeling. One may look at timetables or charts, but the decision forms as a feeling in the human mind, not as an objective data set. This expectation changes if the authorities announce that the train will start at 05:30 AM from January 1, 2022.

If you can imagine that situation, you can also imagine the gossip and discussion around it. Countless humans will wonder why the rule was changed and for what purpose. Why do we imagine facts about facts? Is it because we are human? Or is it because we humans are built that way? All this gossip and discussion centers on the emotions humans exhibit towards the change: some may welcome it, some may curse it. In the end, it is all feelings.
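To make the point concrete, here is a minimal sketch in Python. The data and the deliberately naive frequency-based "model" are hypothetical, not a real forecasting system; the sketch only shows that a predictor built purely on past data is blind to an intervention made in the present.

```python
from collections import Counter

# Ten years of (hypothetical) daily observations: the train always left at 04:30.
past_departures = ["04:30"] * 3650

def predict_departure(history):
    """A deliberately naive 'model': predict the most frequent past departure time."""
    return Counter(history).most_common(1)[0][0]

print(predict_departure(past_departures))  # "04:30" -- the past, projected forward

# The intervention: from January 1, 2022 the authorities reschedule the train.
actual_departure = "05:30"

# No amount of past data could have anticipated this change.
print(predict_departure(past_departures) == actual_departure)  # False
```

The model is not "wrong" about the past; it simply has no data about the decision that changed the future, which is exactly what PAST ≠ FUTURE means.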
The hilarious nature of humanity is that we never admit our feelings as they are. If we curse someone and are asked why, we will most certainly say it is because of the other person's action, not because of how we feel about that action. The truth may be otherwise. We are very good liars.
It is only in the case of horrific tragedies that external events and internal feelings come very close together. If a loved one died during the pandemic, it is a horrific event, and someone close to the person feels it as such. This represents a gap, a gap where there is no possibility to adapt. We cannot change the present. We cannot intervene. All we can do is grieve and accept the loss.
Here comes the question: what if Machine Learning algorithms could help prevent such tragedies? This is the singular reason Machine Learning is making tremendous forays into health care. The algorithm cannot adapt, but it can intervene. Humans need a tool to intervene before the death of a loved one. Seen from this angle, there is tremendous hope in Machine Learning, but it is not without a catch.
The catch is choice. That choice is not in the hands of Machine Learning algorithms; it is in the hands of humans. Applying human thinking: if we fail to make the correct choice now (acting in the present), we cannot intervene in the data that will be produced as a result of that choice (an inability to change the future). If humanity as a whole chooses to accept its feelings as they are, and to express those feelings as they are, then the choice is likely to be a correct one (again, human thinking; many may disagree). If not, Machine Learning will still intervene, but it may intervene in the wrong way.
This is once again a human making predictions based on feelings, but it has been proved already: when Machine Learning algorithms are controlled not by the humans they "intervene" with but by the humans who "wield" them, mistakes happen, bias is induced, and human problems are amplified. The key issue here is the human who wields. What does that wielder really want? What does he feel about the humans he wields the algorithm at? What does he feel the algorithm can be used for? Is the wielder's choice of more value than anyone else's? These questions might seem cliché (past data), but they are not likely to change unless intervened with.
When we do not make the right choice, one attuned to our feelings, either the past appears nostalgic and we crave to be back in it, or the past appears horrific and we do everything to escape it. The same can be said about the machine-predicted future that is presented to us now. But we have no idea what is right and what is not. The past is not all that nostalgic; the future may not be all that bad. The only things we certainly have are the present and our puny human minds.
Humans cannot change the future. Humans Can Only Adapt.
Machines cannot adapt. They can only Intervene.
Adapting Is A Choice. Intervening Is A Tool.
So, I Choose To Say that Choosing To Accept Human Feelings may be The First Step To Adapting To A Machine-Intervened Future.


