
Algorithmic Decision Making in Securities Trading: Assessing Liability and Regulatory Challenges

- by Raghav Sharma, a student at Christ (Deemed to be) University, Bangalore. This is the 8th winning entry of the National Article Competition organized by CBLT.


Abstract

The rapid integration of algorithmic and AI-driven systems into securities trading poses a serious challenge to traditional legal frameworks, particularly on the question of liability allocation. Automated trading delegates ever more decisions to algorithms, leaving no obvious human actor to hold accountable. Traditional fault-based liability regimes, by contrast, presume a human whose negligence or intentional misconduct can be identified and blamed; they cannot capture cases where harm is caused by autonomous or semi-autonomous systems operating without direct human control at the moment of harm. The need to resolve this problem is especially acute in financial markets, where trust and accountability are crucial. When AI systems engage in market manipulation, generate investor losses, or react badly to flawed inputs, the legal system still does not know where to assign liability: to the developer of the algorithm, to the deploying firm, or even to the regulator. As machine learning and deep learning algorithms grow increasingly opaque and autonomous, it becomes even less clear who should be held accountable for them. This paper analyzes the frameworks adopted in jurisdictions such as the U.S., the EU, and India and proposes a more pragmatic and effective liability framework.

Keywords: Securities Regulation, Algorithmic Trading, Liability, Policy Recommendations

 

“Technology is a useful servant but a dangerous master.”

Christian Lous Lange (historian and political scientist)


Introduction

The global securities markets have been fundamentally transformed by the rise of algorithmic trading, a market expected to reach USD 4.06 billion by 2032.[1] Human decision-making, once grounded in intuition, has been replaced by highly automated systems capable of trading thousands of times a second on the basis of data and algorithmic logic. Trades initiated, executed, or influenced by Artificial Intelligence, Machine Learning, and NLP[2]-powered algorithmic systems, from HFT[3] firms to institutional asset managers on major financial exchanges, now constitute a significant proportion of activity, with roughly 70% of total trades carried out through algorithmic trading.[4] Although this technological shift has brought undeniable efficiency, liquidity, and market depth, these systems also raise serious questions of liability and accountability when they degrade, fail, or cause losses. Even where algorithms produce measurable improvements on multiple informational-efficiency metrics, the efficiency of capital allocation suffers from their operations.[5] They can also reduce liquidity[6] and fuel a wasteful arms race.[7] Securities trading traditionally relied on conventional liability frameworks, but the recent, rapid integration of algorithmic and AI-driven systems has created major challenges for that arrangement. As trading and decision-making become ever more automated, it grows harder to assign blame or legal responsibility. Traditional liability regimes assume a human actor who can be negligent or intentionally harmful; they cannot capture cases where harm is caused by autonomous or semi-autonomous systems operating without direct human control at the time of harm.

This issue is critical in financial markets, where trust and accountability are fundamental. When AI systems engage in market manipulation, generate investor losses, or react badly to flawed inputs, the legal system still does not know where to assign liability: to the developer of the algorithm, to the deploying firm, or even to the regulator. Recent advances in machine learning (and deep learning) make accountability even harder to trace. Various jurisdictions have tried to respond. The SEC’s Rule 15c3-5[8] mandates pre-trade risk controls, while the Dodd-Frank Act[9] prohibits spoofing and other fraudulent practices. In U.S. v. Coscia,[10] the accused used algorithmic trading to place and cancel large orders to manipulate market prices to the benefit of his smaller orders on the opposite side. He was sentenced to three years in prison for commodities fraud and spoofing under the Dodd-Frank Act. In the European Union, under MiFID II and the Market Abuse Regulation (MAR), firms that use algorithmic trading systems are required to establish a governance structure, conduct stress testing, and disclose trading strategies to regulators. Trading algorithms may also fall among the systems classified as high-risk under the EU Artificial Intelligence Act.
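To make the Coscia-style pattern concrete, the following is a minimal, illustrative Python sketch of how a surveillance system might flag it: a trader whose order volume is overwhelmingly cancelled and whose cancelled orders dwarf the small orders that actually execute. The record format and both thresholds are assumptions chosen for illustration, not regulatory standards or any exchange's actual methodology.

from collections import defaultdict

# Illustrative thresholds only; real surveillance systems calibrate these
# against venue- and instrument-specific baselines.
CANCEL_RATIO_LIMIT = 0.95   # share of a trader's order volume that is cancelled
SIZE_SKEW_LIMIT = 10.0      # average cancelled size vs. average filled size

def flag_spoofing_candidates(orders):
    """orders: iterable of (trader, size, status), status in {'cancelled', 'filled'}.
    Flags traders whose volume is overwhelmingly cancelled and whose cancelled
    orders dwarf their filled ones, i.e. large resting orders cancelled while
    small orders execute."""
    stats = defaultdict(lambda: {"cancelled": [0, 0], "filled": [0, 0]})  # [volume, count]
    for trader, size, status in orders:
        stats[trader][status][0] += size
        stats[trader][status][1] += 1
    flagged = []
    for trader, s in stats.items():
        c_vol, c_n = s["cancelled"]
        f_vol, f_n = s["filled"]
        total = c_vol + f_vol
        if total == 0 or c_n == 0:
            continue
        avg_cancel = c_vol / c_n
        avg_fill = f_vol / f_n if f_n else 0.0
        size_skew_ok = avg_fill == 0.0 or avg_cancel / avg_fill >= SIZE_SKEW_LIMIT
        if c_vol / total >= CANCEL_RATIO_LIMIT and size_skew_ok:
            flagged.append(trader)
    return flagged

# Example: 5,000 shares placed and cancelled against 10 shares filled flags trader "A".
print(flag_spoofing_candidates([("A", 5000, "cancelled"), ("A", 10, "filled")]))

A real system would add order-book context (resting time, distance from the touch, repetition across sessions); the point here is only that the conduct at issue leaves a measurable statistical footprint.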


Cross-Jurisdictional Analysis

The regulation of liability for algorithmic decisions in the securities markets is evolving rapidly across jurisdictions. As high-frequency and AI-driven algorithmic trading makes ever more activity in crypto and other financial markets autonomous or semi-autonomous, regulators in the U.S., the EU, and India all face the challenge of adapting traditional legal doctrines to the actions of software-based trading agents. This section compares how these jurisdictions approach civil, criminal, and regulatory liability arising from algorithmic trading.


United States 

Few countries have as sophisticated and mature a securities regulatory framework as the United States. The main agencies overseeing the securities and derivatives markets, the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC), have been proactive in addressing the risks of algorithmic trading. Under common law doctrines, civil liability in the context of algorithmic trading in the U.S. may arise from negligence, breach of fiduciary duty, or product liability. Thus, if an algorithm is negligently designed and causes substantial financial losses, investors may sue the developer or the deploying firm if they can show duty of care, breach, causation, and damages.

The Securities Exchange Act[11] authorizes the SEC to bring enforcement actions against parties engaging in manipulative or deceptive practices, including algorithmic spoofing, layering, and quote stuffing. The Michael Coscia case[12] is among the most notable: the defendant was sentenced for designing algorithms to place and cancel large numbers of orders in order to manipulate market prices. He was convicted of spoofing under the Dodd-Frank Act,[13] which specifically criminalized certain algorithmic trading strategies. On the regulatory side, SEC Rule 15c3-5[14] requires firms to implement risk management controls to prevent erroneous transactions and market manipulation by automated systems, including pre-trade risk checks and controls over order flow.
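As a rough illustration of the kind of pre-trade controls Rule 15c3-5 contemplates, the Python sketch below gates every order against notional, size, price-collar, and aggregate credit limits before it can reach an exchange. The Order structure and all limit values are illustrative assumptions; the rule itself leaves calibration to each broker-dealer.

from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str       # "buy" or "sell"
    qty: int
    price: float

# Illustrative limits only; the rule leaves calibration to each broker-dealer.
MAX_ORDER_NOTIONAL = 1_000_000.0   # per-order value ceiling
MAX_QTY = 50_000                   # "fat finger" quantity ceiling
PRICE_COLLAR = 0.10                # max fractional deviation from reference price

def pre_trade_check(order, reference_price, used_credit, credit_limit):
    """Reject an order that breaches notional, size, price-collar, or
    aggregate credit limits before it can reach the exchange."""
    notional = order.qty * order.price
    if notional > MAX_ORDER_NOTIONAL:
        return False, "order notional exceeds per-order limit"
    if order.qty > MAX_QTY:
        return False, "quantity exceeds fat-finger limit"
    if abs(order.price - reference_price) / reference_price > PRICE_COLLAR:
        return False, "price outside collar around reference price"
    if used_credit + notional > credit_limit:
        return False, "aggregate credit limit would be breached"
    return True, "accepted"

# Example: a 1,000-share buy at 101.0 against a reference price of 100.0.
print(pre_trade_check(Order("XYZ", "buy", 1000, 101.0), 100.0, 0.0, 5_000_000.0))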

Although algorithms themselves cannot be held criminally liable, the human actors who design, deploy, or supervise them can be, where their conduct amounts to willful market abuse. The most pressing problem is proving scienter (intent or knowledge of illegality) for behavior that is not under the direct control of any person. To date, U.S. courts have held the developers and firms responsible for designing and overseeing such systems liable where they fail to run those systems under proper controls.


European Union

In regulating algorithmic trading, the EU has developed three facets of regulation in the area: the MiFID II Directive,[15] the Market Abuse Regulation (MAR),[16] and the AI Act. MiFID II, which came into effect in January 2018, addresses the risks of algorithmic trading directly.[17] Article 17[18] requires investment firms using algorithmic trading systems to ensure that those systems are robust, tested, and continuously monitored. Article 17(1)[19] requires firms to maintain effective risk controls, stress-test their automated trading strategies, and provide human oversight. Under Article 17(2),[20] firms must also notify regulators and register their algorithmic trading activities with trading venues. In addition, MAR prohibits insider trading[21] and market manipulation[22] however they are achieved, not only through the use of algorithms; spoofing, for example, is explicitly prohibited under MAR as market manipulation by means of algorithms. MAR does not differentiate between human and machine conduct; liability falls on the firm or individual who controls the trading algorithm.

The EU AI Act extends liability beyond the civil sphere into regulatory liability: developers, deployers, and even users of high-risk AI systems are exposed to sanctions if they fail to meet stringent safety requirements.[23] The EU has also adopted DORA,[24] which requires financial entities to be resilient against ICT disruptions (including failures of algorithmic trading platforms). EU law does not generally impose criminal liability for the unintentional acts of a misbehaving algorithm, but entities found to have breached compliance requirements can be fined and barred from market practice. Civil liability may also arise under the national tort or contract laws of Member States.


India

India’s algorithmic trading framework is relatively nascent but developing quickly. The main regulator, SEBI,[25] has issued a series of circulars on automated trading. SEBI introduced a framework for algorithmic trading in 2012, defining it as “any order that is generated using automated execution logic”.[26] Under that framework, pre-trade risk controls, post-trade surveillance, and approval of algorithmic strategies are required before deployment is permitted.[27] In addition, all algorithmic orders must pass through prescribed risk checks, and co-location facilities are regulated to prevent unfair advantages.[28] Notably, India has not enacted dedicated legislation comparable to the U.S. Dodd-Frank Act or the EU’s MiFID II. Instead, it invokes existing provisions of the SEBI Act,[29] the Securities Contracts (Regulation) Act[30] and the IT Act[31] to address wrongdoing in algorithmic trading. Fraudulent and unfair trade practices, including those carried out through automated systems, are punishable with penalties under Section 15HA[32] of the SEBI Act.

SEBI’s readiness to take enforcement action even without dedicated algorithmic legislation is evident from its penalty against Karvy Stock Broking Limited for using automated systems to misappropriate client funds.[33] India’s current liability framework thus rests on regulatory enforcement and administrative penalties. In tort, civil liability may arise where an investor’s losses can be attributed to negligence in the programming or oversight of algorithms, but courts have yet to develop a consistent jurisprudence on such matters. In cases of fraud, criminal liability may be pursued under Section 318[34] of the BNS or Section 66[35] of the IT Act, though these provisions are seldom invoked in the context of algorithmic trading. Product liability for software defects is not recognized in India unless the software is integrated into a physical product. Nor is there any explicit law recognizing AI systems as “agents” amenable to legal enforcement.


Comparative Observations

Across jurisdictions, the “black box” problem arises because decisions made by complex machine learning algorithms lack transparency and cannot be easily interpreted. This opacity makes it difficult to understand how automated systems reach their conclusions, and it constrains regulators and courts in allocating liability for harms that lack demonstrable intent or negligence. The U.S. model rests on punitive and criminal sanctions for intentional misconduct and a sophisticated enforcement mechanism. The EU approach, by contrast, is compliance-oriented: it imposes strong pre-emptive controls and governance mechanisms and is moving toward codifying AI-specific liability.

India’s approach to algorithmic trading liability follows a distinct paradigm that corresponds neither to the enforcement-driven U.S. system nor to the governance-dependent EU system. Although SEBI has implemented robust structural safeguards in its 2025 retail algorithmic trading framework[36], such as unique identifiers for algorithmic orders, broker liability for third-party platforms, and strict empanelment requirements for algorithm providers, the allocation of liability has not been developed as holistically as among its international peers. In contrast to the explicit distribution of liability in the criminal provisions of the Dodd-Frank Act, or the AI Act’s framework for holding deployers to account, India relies on a framework of regulatory compliance and market surveillance. Recent enforcement includes the market manipulation allegations against Jane Street, which resulted in the blocking of accounts totaling 565 million[37]. Market participants nonetheless lack clarity on how liability applies, given the absence of codified principles of algorithmic liability. This gap is especially notable in India, where derivatives trading volumes exceed those of the equity markets, creating conditions ripe for algorithmic exploitation that the current regulations cannot fully check. And while dedicated AI regulation gathers pace globally, India has largely retained its focus on civil remedies and market regulation, though enforcement actions increasingly target such algorithmic abuse.



Proposed Liability Framework

Given the sophistication, autonomy, opacity, and ever-greater embeddedness of algorithmic decision-making in securities trading, existing legal and regulatory structures have failed to keep pace. The current frameworks in all of these jurisdictions mostly adapt traditional legal doctrines such as negligence and corporate responsibility, doctrines that were not designed for the peculiar nature of algorithmic systems. A forward-looking, multi-tiered liability framework is needed: one that ensures accountability without stifling innovation, and that balances regulatory oversight, market efficiency, and investor protection.

A tiered model of responsibility is the central pillar of the proposed liability framework, distributing responsibility among the principal actors in the lifecycle of an algorithmic trading system: developers, deployers (trading firms and brokers), and regulators. The model draws on well-established chain-of-liability principles from product liability law while accounting for the specific technical and operational risks inherent in AI systems.


Developer's Liability

Developers of algorithmic trading systems built on machine learning or neural networks should be held liable for defects in design, code, or training data that cause harm to the market or to investors.[38] Just as manufacturers of tangible products are best placed to anticipate and mitigate defects in those products, developers are best placed to anticipate and mitigate system errors, code vulnerabilities, and algorithmic bias. A statutory “algorithmic duty of care” is recommended: developers would be required to take reasonable measures to secure the safety, reliability, and auditability of their trading algorithms, including training on non-discriminatory, representative datasets;[39] continuous stress testing and back-testing under different market conditions; safeguards against market manipulation and flash crashes; and clear documentation of model logic, even for black-box systems. Obligations of this kind would not only strengthen trust in automated trading but also provide legal clarity about where blame lies when an unexpected failure occurs.
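To illustrate what “back-testing under different market conditions” might mean in practice, here is a minimal Python sketch that stress-tests a strategy against synthetic price paths seeded with occasional flash-crash style shocks. The strategy(prices) -> equity-curve interface, the shock model, and the drawdown limit are all assumptions made for illustration, not an industry-standard methodology.

import random

def simulate_price_path(n_steps=1000, start=100.0, vol=0.02, shock_prob=0.001):
    """Synthetic random-walk path seeded with occasional flash-crash style shocks."""
    prices, p = [start], start
    for _ in range(n_steps):
        p *= 1 + random.gauss(0, vol)
        if random.random() < shock_prob:
            p *= 0.9  # sudden 10% drop, a crude flash-crash proxy
        p = max(p, 0.01)
        prices.append(p)
    return prices

def max_drawdown(equity):
    """Largest peak-to-trough fall in an equity curve, as a fraction of the peak."""
    peak, worst = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

def stress_test(strategy, n_paths=100, drawdown_limit=0.15):
    """strategy(prices) -> equity curve; this interface is an assumption.
    Returns the share of simulated paths on which the strategy stays
    within the drawdown limit."""
    passed = sum(
        max_drawdown(strategy(simulate_price_path())) <= drawdown_limit
        for _ in range(n_paths)
    )
    return passed / n_paths

# Example: a buy-and-hold "strategy" whose equity simply tracks the price.
print(stress_test(lambda prices: prices))

A duty of care framed this way gives courts something concrete to examine: whether the developer ran such tests, over how many scenarios, and what pass rate was accepted before deployment.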


Trading Firm/Fund Manager Liability

If building the system is the developer’s role, trading firms and brokers are responsible for its proper use and monitoring. Liability here would rest on procedural negligence, such as failure to implement pre-trade risk checks, kill switches, or human-in-the-loop oversight. Firms should monitor the behavior of their trading algorithms in real time; record all orders and trades in logs for auditing purposes; disclose to clients and regulators the nature and risks of the algorithmic strategies deployed; and comply with market abuse prohibitions on spoofing, layering, and wash trades. Where these standards are not met, civil and criminal penalties are possible, along with license suspensions and, in egregious cases, felony charges. The U.S. SEC’s Market Access Rule and Article 17 of the EU’s MiFID II exemplify such requirements, but stronger enforcement and statutory backing would improve compliance.
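The kill-switch and real-time monitoring duties described above can be made concrete with a short sketch. The following is a minimal, assumed design in Python, not any venue’s actual control: it halts trading when a loss limit or message-rate limit is breached and logs every decision so that the audit trail exists before anything goes wrong. Both limit values are illustrative.

import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("algo-monitor")

class KillSwitch:
    """Illustrative runtime guard: halts trading when loss or message-rate
    limits are breached and logs every decision for later audit."""

    def __init__(self, max_loss=100_000.0, max_orders_per_sec=500):
        self.max_loss = max_loss
        self.max_orders_per_sec = max_orders_per_sec
        self.realized_loss = 0.0    # conservatively, profits do not offset it
        self.order_times = []
        self.halted = False

    def record_fill(self, pnl):
        self.realized_loss -= min(pnl, 0)  # accumulate absolute losses only
        log.info("fill pnl=%.2f cumulative_loss=%.2f", pnl, self.realized_loss)
        if self.realized_loss > self.max_loss:
            self.halt("loss limit breached")

    def allow_order(self):
        now = time.time()
        self.order_times = [t for t in self.order_times if now - t < 1.0]
        if self.halted:
            return False
        if len(self.order_times) >= self.max_orders_per_sec:
            self.halt("message rate limit breached")
            return False
        self.order_times.append(now)
        return True

    def halt(self, reason):
        self.halted = True
        log.error("KILL SWITCH: trading halted (%s)", reason)
        # In production this would also cancel all resting orders at the venue.

Whether a firm deployed a control of this kind, and whether it was calibrated sensibly, is exactly the sort of question a procedural-negligence inquiry would ask.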


Regulatory Authority Liability and Oversight Duties

A major part of the proposed framework consists of specific guidelines, frequent audits, and a centralised AI incident-reporting platform. Regulatory authorities should accredit algorithmic models through a standardized approval process and make such accreditation mandatory. They should run industry-wide compliance sandboxes in which new trading technologies can be tested, and develop sensible explainability thresholds for high-impact algorithms, drawing on the EU AI Act. Where market manipulation is alleged, regulators must be able to require audit trails along with submission of the algorithmic source code. Oversight mechanisms of this kind would improve investor confidence and protect market integrity by gradually closing the evolving gap between law and technology.


Policy Recommendations

Legal reforms must provide a codified basis for how liability is to be attributed and compensated. Jurisdictions should establish a minimum benchmark of algorithmic safety and diligence by statutorily imposing a duty of care on the developers and firms involved in securities trading. This could operate much as tort law does: once deviation from the norm is established, liability follows. SEBI could introduce such a clause under its algorithmic trading guidelines, while the EU and the U.S. could amend existing financial laws to establish comparable standards. Jurisdictions should also require every trading algorithm, along with its associated risk models, usage data, and back-test outcomes, to be submitted to a central registry for transparency and regulatory awareness. Such data would also help regulators track systemic exposure and the interdependencies that can produce contagion risk.
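As a sketch of what a central-registry submission could contain, the Python example below defines a hypothetical record, with a content hash so a regulator can verify exactly which code version was registered. Every field name and value here is an assumption for illustration; no such registry schema currently exists in any of the jurisdictions discussed.

from dataclasses import dataclass, field
import hashlib
import json

@dataclass
class AlgoRegistryEntry:
    """Hypothetical record for a central algorithm registry."""
    algo_id: str
    developer: str
    deploying_firm: str
    strategy_class: str          # e.g. "market-making", "stat-arb"
    code_hash: str               # fingerprint of the deployed code version
    backtest_summary: dict       # e.g. {"max_drawdown": 0.08, "paths": 500}
    risk_controls: list = field(default_factory=list)

def fingerprint(source_code):
    """A content hash lets the regulator verify which code version was registered."""
    return hashlib.sha256(source_code.encode()).hexdigest()

# Hypothetical names and values, purely for illustration.
entry = AlgoRegistryEntry(
    algo_id="IN-2025-000123",
    developer="ExampleQuant Ltd",
    deploying_firm="Example Broking",
    strategy_class="market-making",
    code_hash=fingerprint("def strategy(prices): ..."),
    backtest_summary={"max_drawdown": 0.08, "paths": 500},
    risk_controls=["pre-trade notional limit", "kill switch"],
)
print(json.dumps(entry.__dict__, indent=2))

A registry built on records of this kind would let a regulator aggregate exposures by strategy class or shared code lineage, which is precisely the contagion-mapping function described above.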

On the compensation side, an industry-funded liability pool or mandatory insurance coverage for algorithmic operations may be appropriate given the scale of potential damages. Much as banks contribute to deposit insurance schemes, trading firms could contribute to a compensation fund to be drawn upon when an algorithm is confirmed to have misbehaved, providing swift redressal to injured investors without an excessive adverse impact on any single actor. Liability should not become a weapon that deters innovation; the framework should enable compliance by design and encourage safer algorithms. For example, firms that adhere to certified “ethical AI” standards could receive reduced compliance costs or priority processing, and regulators could create regulatory sandbox opportunities for new technologies that have demonstrated adequate risk controls.



Conclusion

Although jurisdictions are increasingly responding to algorithmic decision-making in securities trading with new laws, the legal landscape remains fragmented. The United States emphasizes strict punitive enforcement and criminal liability; the European Union insists on regulatory compliance and structural governance; India leans on regulatory supervision without yet codifying liability for algorithmic agents. These varied approaches reflect different institutional capabilities and market structures, but all confront the shared challenge of adapting liability norms to technologies that operate with ever less direct human involvement. The proposed multi-tiered liability framework bridges this gap by identifying the distinct responsibilities of developers, trading entities, and regulators. As creators of these systems, developers owe an “algorithmic duty of care” to ensure that training data, model architecture, and deployment conditions are secure and auditable. Responsibility for deployment falls on trading firms, the intermediaries, who must adhere to standard risk-reduction processes such as kill switches and human oversight. Regulatory bodies, in turn, must move toward active accreditation, transparency mechanisms, and regulatory sandboxes that encourage innovation without compromising safety.

Taken together, these tiers produce a liability structure that is balanced and forward-compatible. Importantly, it also includes innovative mechanisms such as liability insurance pools and regulatory incentives for firms that adopt ethical AI standards. Provisions of this kind will ensure speedy redressal for injured investors without stifling the evolution of the market. As algorithmic systems become routine in securities markets, building legal clarity and trust into their operation grows ever more important. The framework offers a practical means of fitting legal accountability to modern technological infrastructure: a balanced structure that deters reckless automation while building trust in automated markets. Embedding clear legal obligations and structured oversight into algorithmic trading ensures that these powerful tools remain our servants, not our masters, empowering markets to innovate safely rather than succumb to unchecked complexity.


[1] Allied Market Research, 'Algorithmic Trading Market by Component (Solution and Services), Deployment Mode (On-premises and Cloud), Type (Stock Markets, FOREX, ETF, Bonds, Cryptocurrencies and Others), Type of Trader (Institutional Investors, Long-term Traders, Short-term Traders and Retail Investors): Global Opportunity Analysis and Industry Forecast, 2024-2032' <https://www.alliedmarketresearch.com/algorithmic-trading-market-A08567> accessed 9 April 2025.

[2] Natural Language Processing.

[3] High Frequency Trading.

[4] Dr. S. Subha, 'Role of Artificial Intelligence in Stock Trading' (2025) 7(1) Thiagarajar College of Preceptors Edu Spectra <https://www.eduspectra.com/feb2025/edu_spectra_v7s1_005.pdf> accessed 8 April 2025.

[5] Yesha Yadav, 'How Algorithmic Trading Undermines Efficiency in Capital Markets' (2015) 68 Vand. L. Rev. 1607, 1619 <https://cdn.vanderbilt.edu/vu-wordpress-0/wp-content/uploads/sites/278/2015/11/19120019/How-Algorithmic-Trading-Undermines-Efficiency-in-Capital-Markets.pdf> accessed 9 April 2025.

[6] Andrei A. Kirilenko and Andrew W. Lo, 'Moore's Law versus Murphy's Law: Algorithmic Trading and Its Discontents' (2013) 27 J. Econ. Perspect. 51, 60 <https://dspace.mit.edu/bitstream/handle/1721.1/87768/Lo-Moore%27s%20law.pdf> accessed 9 April 2025.

[7] Supra note 5.

[8] Securities and Exchange Commission, 'Risk Management Controls for Brokers or Dealers with Market Access' (17 November 2010) 75 Fed Reg 69792, Rule 15c3–5.

[9] Dodd-Frank Wall Street Reform and Consumer Protection Act 2010, Pub L No 111–203, 124 Stat 1376 (2010).

[10] United States v. Michael Coscia, 866 F.3d 782 (7th Cir. 2017).

[11] Securities Exchange Act of 1934, 15 USC § 78a (1934), as amended by Public Law 112-158 (10 August 2012).

[12] Supra note 10.

[13] Supra note 9.

[14] Supra note 8.

[15] Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments (MiFID II).

[16] Regulation (EU) No 596/2014 of the European Parliament and of the Council of 16 April 2014 on market abuse (Market Abuse Regulation).

[17] MiFID II Directive, art. 93(1).

[18] MiFID II Directive, art. 17.

[19] MiFID II Directive, art. 17(1).

[20] MiFID II Directive, art. 17(2).

[21] MAR, art. 14.

[22] MAR, art. 15.

[23] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence Act).

[24] Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector (DORA).

[25] Securities and Exchange Board of India.

[26] SEBI Circular No. CIR/MRD/DP/09/2012 dated 30 March 2012 <https://www.sebi.gov.in/sebi_data/attachdocs/1333109064175.pdf> accessed 9 April 2025.

[27] SEBI Circular No. CIR/MRD/DP/16/2013 dated 21 May 2013 <https://www.sebi.gov.in/legal/circulars/may-2013/broad-guidelines-on-algorithmic-trading_24790.html> accessed 9 April 2025.

[28] ibid.

[29] The Securities and Exchange Board of India Act, 1992.

[30] Securities Contracts (Regulation) Act, 1956.

[31] The Information Technology Act, 2000.

[32] The Securities and Exchange Board of India Act, 1992, sec 15HA.

[33] 'Sebi issues ₹25cr notice to Karvy Broking, CMD' Times of India (Delhi, 8 August 2024) <https://timesofindia.indiatimes.com/city/delhi/sebi-issues-notice-to-karvy-broking-for-misusing-clients-funds/articleshow/112357580.cms> accessed 9 April 2025.

[34] The Bharatiya Nyaya Sanhita, 2023, sec 318.

[35] Supra note 31, sec 66.

[36] SEBI, 'Safer Participation of Retail Investors in Algorithmic Trading', Circular No. SEBI/HO/MIRSD/MIRSD-PoD/P/CIR/2025/0000013 (4 February 2025) <https://www.sebi.gov.in/legal/circulars/feb-2025/safer-participation-of-retail-investors-in-algorithmic-trading_91614.html> accessed 20 August 2025.

[37] SEBI, Interim Order in the matter of Index Manipulation by Jane Street Group, WTM/AN/MRD/MRD-SEC-3/31516/2025-26.

 

[38] Bryan H. Choi, 'Negligence Liability for AI Developers' (Lawfare, 26 September 2024) <https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers> accessed 10 April 2025.

[39] Alina Glaubitz, 'How Should Liability Be Attributed for Harms Caused by Biases in Artificial Intelligence?' (Senior Thesis, Yale Department of Political Science, 29 April 2021) <https://politicalscience.yale.edu/sites/default/files/glaubitz_alina.pdf> accessed 10 April 2025. See Emily Black and others, 'The Legal Duty to Search for Less Discriminatory Algorithms' (10 June 2024) arXiv preprint <https://arxiv.org/abs/2406.06817> accessed 10 April 2025.
