PM IAS THE HINDU EDITORIAL ANALYSIS JUNE 17

Railway safety — listen to the voices from below

Introduction

  • Nothing focuses the nation’s collective attention on the Indian Railways as much as a major accident. The triple train collision at Bahanaga Bazar railway station, near Balasore in Odisha, on June 2, which led to the tragic loss of over 280 lives, has evoked all the expected responses from various quarters: explanations of how the accident occurred, remedial measures to prevent accidents in the future, and comparisons with railway systems abroad. In short, there is an overwhelming sense of déjà vu.

Safety and the information flow

  • Safety depends crucially on the flow of information regarding unsafe practices or situations on a real-time basis.
  • Unlike many other organisations or industries, where activities or operations are concentrated more or less in a limited physical area, the activities of the Railways are spread geographically over a wide area, involving a multiplicity of disciplines (departments) that need to work in close coordination on a real-time basis to ensure the smooth and safe running of trains.
  • To ensure uniformity in compliance with rules and regulations and safety in operations, a large number of codes and manuals have been developed for different departments over the decades to standardise procedures as far as possible.

Top-down approach

  • Ever since the inception of the railways in this country, periodic field inspections by authorities at various levels have been one of the main tools for the management to ensure compliance with laid-down procedures and standards of workmanship.
  • While this system has, by and large, stood the test of time over the decades, it suffers from a few drawbacks, particularly in the context of railway safety.
  • By its very nature, the “top-down” approach places the onus of detecting deviations from the norm on the higher authorities.
  • It becomes a veritable “cops and robbers” scenario, in which the higher authority looks down on the staff at the cutting-edge level with suspicion and distrust; and, conversely, the staff at the lower levels adopt an attitude of “catch me if you can”.
  • It encourages window dressing and sweeping of problems under the carpet. Transparency and frankness are usually the casualties in such a situation.
  • Detection and rectification of such deviations at the earliest opportunity can prevent many unsafe situations from developing into serious accidents.
  • While a remedy may not be available in every case, even becoming aware of shortcomings on a real-time basis can often help the management avoid a major disaster.

Confidential Incident Reporting and Analysis System (CIRAS)

  • The system was developed at a British university in the mid-1990s for application on the British railways.
  • The underlying philosophy is to encourage staff at the lower levels to point out deviations on a real-time basis, while maintaining the confidentiality of the reporter and encouraging the expression of frank views.
  • The system, in effect, turns the conventional top-down inspection on its head. It is, in fact, an example of real empowerment of staff.
  • With the rapid advances in communications and information technology in the nearly three decades since CIRAS was developed, the introduction of a similar reporting system on the Indian Railways should not be difficult.
  • However, a note of caution is needed. The success and effectiveness of a CIRAS-like reporting system depend not only on putting in place the physical infrastructure but also on a total change in the mindset of the management, from the highest to the lower levels, vis-à-vis the staff at the field level.
  • There has to be an attitudinal change from the conventional approach of fault-finding and punishment to a more enlightened ethos of a shared commitment to ensure safety at all levels.
  • The aim should be to correct, not punish. Listen to the voices from below and act. Effecting this change is not easy.

Way forward

  • Perhaps it is time to have a serious rethink on the recently introduced Indian Railways Management Service (IRMS) scheme, which is bound to destroy whatever loyalty and sense of “ownership” exists towards a particular discipline (department) amongst the management cadre.
  • It is perhaps also time to revert to the earlier system of having a full-time Cabinet Minister for the Railways.
  • Unprecedented levels of investment, at a time when the organisation is going through a challenging phase of transformation amid many external pressures, require undivided attention at the highest policy-making level.

Editorial 2: Reflections on Artificial Intelligence, as friend or foe

Introduction

  • Artificial Intelligence (AI) has been dominating the headlines, both for its triumphs and for the fears being expressed by many, including some of the best minds in AI. Several leading AI experts and thinkers have lent their names to various cautionary messages about AI; there is deep concern about it among many who know it best.

Artificial intelligence

  • Artificial General Intelligence (AGI) refers to intelligence that is not limited or narrow. Think of it as the human “common sense” that is absent in today’s AI systems.
  • Common sense makes a human act to save his or her life in a life-threatening situation, while a robot may remain unmoved.
  • There are no credible efforts towards building AGI yet.
  • Many experts believe AGI will never be achieved by a machine; others believe it could be in the far future.

Areas of use, limitations and AGI

  • AI systems are capable of exhibiting superhuman performance on specific or “narrow” tasks, which has put them in the news in games such as chess and in biochemistry for protein folding.
  • The performance and utility of AI systems improve as the task is narrowed, making them valuable assistants to humans. Speech recognition, translation, and even identifying common objects in photographs are just a few of the tasks AI systems tackle today, even exceeding human performance in some instances.
  • Their performance and utility degrade on more “general” or ill-defined tasks. They are weak at integrating inferences across situations using the common sense that humans have.

ChatGPT – AI Tool

  • ChatGPT is a generative AI tool that uses a Large Language Model (LLM) to generate text.
  • LLMs are large artificial neural networks that ingest large amounts of digital text to build a statistical “model”.
  • Several LLMs have been built by Google, Meta, Amazon, and others.
  • ChatGPT’s stunning success in generating flawless paragraphs caught the world’s attention. Writing could now be outsourced to it.
  • Some experts even saw “sparks of AGI” in GPT-4, suggesting that AGI could emerge from a bigger LLM in the near future.
  • True AGI will be a big deal if and when it arrives. Machines already outperform humans in many physical tasks; AGI may lead to AI “machines” bettering humans in many intellectual or mental tasks as well.
  • Bleak scenarios of super-intelligent machines enslaving humans have been imagined. AGI systems could be a superior species created by humans outside of evolution.
  • AGI will indeed be a momentous development that the world must prepare for seriously.

The dangers

  • Superhuman AI: the danger of a super-intelligent AI reducing humans to slaves.
  • Malicious humans with powerful AI: AI tools are relatively easy to build, and even narrow AI tools can cause serious harm when matched with malicious intent. LLMs can generate believable untruths as fake news and cause deep mental anguish, even leading to self-harm. Public opinion can be manipulated to affect democratic elections. AI tools work globally, taking little cognisance of boundaries and barriers.
  • Highly capable and inscrutable AI: AI systems will continue to improve and will be employed to assist humans. They may end up harming some sections more than others unintentionally, despite the best intentions of their creators.
  • Another worry is about who develops these technologies and how. Most recent advances took place in companies with huge computational, data, and human resources. ChatGPT was developed by OpenAI which began as a non-profit and transformed into a for-profit entity. Other players in the AI game are Google, Meta, Microsoft, and Apple. Commercial entities with no effective public oversight are the centres of action.

India must be prepared

  • Awareness and debate on these issues are largely absent in India.
  • The adoption of AI systems is low in the country, but the systems that are used are mostly built in the West.
  • We need systematic evaluation of their efficacy and shortcomings in Indian situations.
  • We need to establish mechanisms of checks and balances before large-scale deployment of AI systems.
  • AI holds tremendous potential in different sectors such as public health, agriculture, transportation and governance.
  • As we exploit India’s advantages in them, we need more discussions to make AI systems responsible, fair, and just to our society.
  • The European Union is on the verge of enacting an AI Act that proposes regulations based on a stratification of potential risks.
  • India needs a framework for itself, keeping in mind that regulations have been heavy-handed as well as lax in the past.

Conclusion

  • Everything that affects humans significantly needs public oversight or regulation. AI systems can have a serious, long-lasting negative impact on individuals, yet they can be deployed at mass scale instantly with no oversight.
