

ICT Forensic Expert Witnesses and Disputes over Automated Decision Systems
  • Apr 28, 2022

By Dr Stephen Castell

Dr Stephen Castell, award-winning ICT systems and software consultant professional, and FinTech visionary, active as an international expert witness in major complex computer software and systems disputes and litigation, including the largest and longest such actions to have reached the English High Court.

1. Introduction
There is a rapidly increasing use of Artificial Intelligence (AI) and Machine Learning in the deployment of Automated Decision Systems (ADS) in social, employment, legal, business and economic administration, in both the public and private sectors.  Computer software-implemented algorithms, or ‘algos’, are spreading across a wide range of expanding application areas.  As the demand for AI and Machine Learning expertise relentlessly grows across all industries, sectors and practices, professionals will inevitably find themselves needing to assess more closely the ‘legal and social (re)liability of AI’ [Castell (2021b)].  Disputes and litigation over the use, and the damaging consequences of the use, of ADS are likely to be a growing feature of ‘algo’ social and professional life, in business and in government, going forward, and I suspect that ICT expert witnesses are going to become involved, to one extent or another, in assessments of such disputes, whether in the Criminal or Civil Courts, and/or before other Tribunals.

2.  Expert Experience of an ADS case: Investor v Fund Manager
I was recently engaged as expert witness, and gave sworn testimony, in a Financial Industry Regulatory Authority (FINRA) Arbitration hearing in Massachusetts, USA.  The case was a dispute over use of an Automated Decision System by a major US fund management corporation to close-out the investment trading position of a client, allegedly negligently, with heavy USD losses to its client.  I set out below a shortened, sanitized and anonymized version of my testimony material, but otherwise essentially verbatim.

The technical issues at the heart of the case were:
• What ‘algos’, programmed trading, or AI software did the fund management corporation use?
• Whether or not the fund management corporation used such an ADS, did it in any event fail to use ‘reasonable professional skill, care and diligence’ in its (necessarily software-assisted) management/expert judgement, decision and execution of trade close-outs on behalf of a client to whom it arguably owed a fiduciary duty ‘to hold harmless’?

Answers to Attorney’s Questions posed to Dr Stephen Castell, sworn and under Examination in Investor v Fund Manager – US Arbitration
Q1)  Please state your name and occupation
My name is Stephen Castell.  I am an independent computer software and systems consultant and expert professional, operating through my own company founded in 1978 and known as Castell Consulting.

Q2)  What is your background in this field and how many years of experience do you have?
I have bachelor’s and master’s degrees in mathematics, physics and computer science, and a doctorate in mathematics, plus Chartered Membership of Mathematics, Physics, Computer, Management and Expert Witness Societies, Associations and Institutes.  I have forty years of training, management and business experience in computer and communications consultancy, in a wide range of sectors, including financial services, and as a senior IT and corporate executive of a London boutique investment bank.  I have been interviewed for Archives of IT.

Q3)  Have you provided testimony in legal proceedings?
Yes, many times.

Q4)  Has your testimony been for both the plaintiff side as well as the defense side?  Have you testified in American federal court?
Yes, testimony for both plaintiffs and defendants, in several jurisdictions, including in American federal courts.

Q5)  Who are some of your most high profile clients?
HM Treasury – a foundational research study of the legal security and reliability of computer software, systems and media, carried out for the five principal UK Departments of State, published as The Appeal Report, 1990.
GEC-Marconi – GEC-Marconi v LFCDA, 1991-93; multi-million dispute over ‘functionality extras’ in the development of the London Fire Brigade Mobilising System, the longest software contract case – over a year – to be heard in the English High Court.
Misys plc – AVCC v CHA, 1997-98, Sydney Supreme Court; multi-million Australian Universities administration automation system procurement dispute (eventually settled at a Mediation under Sir Laurence Street).
Airtours plc (now MyTravel plc) – Airtours v EDS, Claim No. HT00/000305, English High Court (Queen's Bench Division - TCC), 2001; high-profile largest computer software and outsourcing contract action to come to trial in the English High Court (£200m claim; £50m counter-claim).
DirecTV – United States District Court, Eastern District of Texas Beaumont Division, Civil Action No. 1:05-CV-0264, 2005; Prior Art research and testimony defending a multi-million infringement action concerning US Patent No. 5,404,505.
UK and International Banking Systems Supplier – Canadian Arbitration, Toronto, 2006-2007; dispute over a major systems contract/project failure, between a leading banking group’s Lending Division and one of the world’s principal software and systems suppliers of banking systems.
Sempra Metals – Claim No HT-05-366, English High Court (TCC), 2006-2007; legal action between a leading City metals trader and a specialist front-to-back commodities trading and back-office software package supplier.
ERG Ltd / Videlli Ltd – PTTC v ERG, 2010-2012, NSW Supreme Court, Australia; very high-profile IT systems contract dispute over the failed ‘TCard’ Integrated Ticketing and Transport System project, involving a claim for AU$90m, with a cross-claim for AU$200m+.
Kaspersky Lab – Lodsys v Kaspersky, 2012-2013, Texas Court; Prior Art research and critical testimony for multi-million high-profile ‘patent troll’ US Patent Dispute.
Permanent Court of Arbitration, ICC Paris – Technical Expert to Arbitral Tribunal, 2017-2018; a $0.5bn dispute between one of the largest US Global Corporations and a Sovereign State.  Data forensics investigation in regard to authentication of circulation and signing of a key electronic document.
Leading Financial Real-Time Markets Trading, Dealing and Administrative Systems Supplier – multi-million dispute with major international Swiss-based investment banking group over alleged faults in ‘algo trading’ software system supply, 2021; settled prior to action after provision of my report assessing presence of ‘software material defects’.
US Attorneys for Plaintiffs in multi-million Cassidy v Voyager Class Action – cryptocurrency trading and services company, misrepresentations of software functionality and investment performance (United States District Court, Southern District of Florida, Case No. 21-24441-CIV-ALTONAGA/Torres), filed December 2021.

Q6)  Do you have any stock or other holdings in Fund Manager?  

Q7)  Are you familiar with computer algorithms and automated systems?
Yes.  All computer software applications are fundamentally constructed of, and implement, algorithms, providing functionality that meets defined systems Requirements, for varying degrees of automation.

Q8)  Are you also familiar with automated decision systems which involve a combination of human and machine in the decisional process?
Yes.  All automated decision systems, implemented in and as computer applications software, are essentially under the management and governance of humans, so that they necessarily involve a combination of human and machine in the decisional process.  There may routinely be a high degree of autonomous decision-making operationally, in real-time, by the machine, with little, or no, human intervention needed.  However, ultimately humans are practically responsible for the decisions taken by, and liable for the consequences of, those automated decision systems.
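As a minimal illustration of that human-and-machine combination, consider the following sketch of a decision function with a human escalation path.  The names, threshold and review mechanism are entirely hypothetical, not any particular production system:

```python
from dataclasses import dataclass


@dataclass
class Decision:
    action: str
    confidence: float


def decide(signal: float, human_review, threshold: float = 0.9) -> Decision:
    """Automated decision with a human-in-the-loop escalation path.

    The machine acts autonomously only when its confidence meets the
    threshold; otherwise the proposed action is referred to a human
    reviewer, who may confirm or override it.
    """
    confidence = abs(signal)
    proposed = "close_out" if signal > 0 else "hold"
    if confidence >= threshold:
        return Decision(proposed, confidence)          # autonomous path
    return Decision(human_review(proposed, confidence), confidence)  # escalated
```

Even in the autonomous path, a human remains responsible: someone set the threshold, defined the actions, and governs the review protocol.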

Q9)  How much may algorithms and automated systems be useful in the financial industry?  (or for a financial institution)
The financial industry has invariably been one of the most demanding of such systems, with a steady appetite for advances in technologies and techniques implementing and providing increasingly automated processes, more complex algorithms, faster decision-making, enhanced ‘big data’ processing and analysis, greater efficiency in and reduced costs of trade and transaction execution and confirmation and, ultimately, improved certainty, security, quality and scale of financial returns and profits.

Q10)  In your experience, have you dealt with cases where you have a computer system that is supposed to serve two interests that are diametrically opposed to one another, i.e. different financial stakes in the subject matter?
(i)  In my professional experience, every computer system is to be conceived, designed, specified, built, operated and managed to meet certain defined Requirements.  Those Requirements may certainly involve or imply delivering functionalities that endeavour to serve interests that are diametrically opposed to one another.  For example, the very first commercial computer systems were built to automate the accounting and bookkeeping functions within enterprises: on the one hand, such systems served the interests of the executives and owners of the enterprise by cutting costs, enabling business expansion without increasing administrative resources, providing an improved service to customers, and reducing headcount, all with a bottom-line increase in sales turnover and net profits.  On the other hand, such systems also served the interests of employees who, despite facing increased workload (often for no increase in pay) and the threat of redundancy through reduced headcount, nevertheless enjoyed greater technology upskilling, with a bottom-line improvement in their individual job security and opportunities, career development, quality of livelihood, and financial compensation.

(ii)  As Group Management Services Manager (CIO) for Bremar Holdings Ltd, International Investment Bankers, in the mid-1970s, I personally designed algorithms and implemented computer systems that not only enhanced the efficiency, financial performance and profits of the bank, but also provided improved services, opportunities and profits to clients of the bank.  For example, for Bremar’s core Eurocredit and Eurodollar Trading operations, I developed and coded a ‘banking paper’ bid-and-offer non-linear programming model and algorithm, for use in the daily sales negotiation activities of Bremar’s Traders.  My model and algorithm had functionality that took account of variables such as volatility, type of option, underlying paper price, timing ‘rests’ of interest payments, strike price, and forward rates, assisting traders to determine the fair bargain price for a call or a put option (the 1997 Nobel Prizewinning Black–Scholes Model, which my work pre-dated by over twenty years, essentially employed the same algorithm).  Use of the algorithm enabled negotiation of an informed bid-and-offer-driven sale transaction price that optimised the profits for both Bremar and its counter-parties – i.e. it was a ‘win-win’ algorithm, for both the seller, and the buyer, of the traded ‘paper’, where these parties are usually seen as having diametrically opposed interests.
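For illustration, the core of such an option fair-value calculation can be sketched using the published, textbook Black–Scholes formulas.  This is a generic sketch, not my original Bremar implementation; the function and parameter names are mine for exposition:

```python
from math import erf, exp, log, sqrt


def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def black_scholes(S: float, K: float, T: float, r: float,
                  sigma: float, kind: str = "call") -> float:
    """Fair value of a European call or put under Black-Scholes assumptions.

    S: underlying price; K: strike; T: time to expiry (years);
    r: risk-free rate; sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
```

A fair value of this kind gives both sides of a negotiation a common, informed reference point – which is precisely what makes a bid-and-offer algorithm ‘win-win’ rather than zero-sum.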

Q11)  What part does human judgment play in relation to today's sophisticated computer systems and algorithms?
As noted earlier, there may be a high degree of autonomous decision-making operationally by today’s sophisticated computer systems and algorithms, with little human intervention needed for their operation.  However, in my experience, and as a matter of professional practice, ultimately human judgment is always responsible for the decisions taken by, and liable for the consequences of, those automated decision systems.

Q12)  We are hearing in the news of algorithms and concerns about bias (e.g. in the job application setting, and electronic communications platforms).  So can there be a built-in bias in algorithms?
The issue of ‘bias’ in algorithms in the news and social and other media can in my view be amateurishly conceived and expressed.  The fundamental principle of professional software construction and delivery is that every computer system, i.e. every implementation of one or more algorithms, is to be conceived, designed, specified, built, operated and managed to meet certain defined Requirements.  Someone has, or had, to define those Requirements, ‘own’ them, be responsible – and liable – for them: we talk in professional terms of their being an identified Requirements Authority.  Thus, whatever is conceived, defined and detailed, by humans, within the Requirements Specification, is the functionality that the algorithms, the computer software, is intended and due to deliver.

Essentially the only evaluative judgment that therefore falls to be made about the eventual delivered and operated software system, i.e. the executable algorithms, is the objective assessment as to whether or not the system meets, i.e. is materially compliant with, its defined Requirements.  When the system does materially comply, we judge and say that the software system and its implemented functioning, operable algorithms are ‘of sufficient quality and fit for purpose’.  There is here no meaning, or place, for evaluation of subjective allegations of ‘bias’.
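That objective, pass/fail character of the assessment can be sketched in code.  This is a toy illustration with hypothetical Requirement identifiers and acceptance checks, not a real test harness:

```python
def assess_compliance(system, requirements):
    """Objectively assess a delivered system against its defined Requirements.

    Each requirement pairs an identifier with an acceptance check: a
    function of the system that returns True (materially compliant) or
    False.  Returns (passed_ids, failed_ids); no subjective judgment is
    involved at this stage.
    """
    passed, failed = [], []
    for req_id, check in requirements:
        (passed if check(system) else failed).append(req_id)
    return passed, failed
```

For example, if the delivered system were a fee calculator, hypothetical Requirements might assert the fee on a given amount; the system is ‘of sufficient quality and fit for purpose’ exactly when the failed list is empty.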

It may be that some other party takes the personal subjective view that the purpose or consequences of said algorithm demonstrates ‘bias’; but that view can have no place methodologically or professionally (or, as I have often been advised by Learned Counsel, legally) in judging the system’s fitness for purpose.

If there is any ‘bias’ to be alleged or assessed in any computer software-implemented algorithm, then it is not to be looked for in the algorithm (which would be meaningless), but in the process by which and by whom the Requirements for the functionality of the software, for the purpose and operation of the algorithm, were conceived, defined and specified.  As I put this truth in a recent paper:
Castell’s Second Dictum: “You cannot construct an algorithm that will reliably decide whether or not any algorithm is ethical”  [Castell (2018)].

Q13)  In this case one of the issues is whether, in a liquidation following a margin call, there was a significant departure from the margin deficit in making such a high liquidation.  What information would you like to see to get to the bottom of what happened?
Irrespective of the subjective issue of possible ‘bias’ in the Requirements, there may always be software defects, deficiencies, intermittent operating faults etc in the system – ‘bugs’.  Taking this into account, the information that would in my experience need to be provided and examined in order to investigate as to whether or not there was “a significant departure from the margin deficit in making such a high liquidation” includes:
• The Requirements Specification of the System.
• The Software Development Records.
• The System Operational Records (including fault logs, incident reports/tickets etc).
• Materials pertaining to the particular ‘margin deficit’ and ‘liquidation’ incident parameters at issue – identification of the specific software code/algorithm functions where the relevant ‘margin deficit’ and ‘liquidation’ processing and decisions were executed in the System; details of like and surrounding trades (to check for patterns, consistencies, anomalies etc); applicable market data upon which the decision functionality was conditioned and/or relied.
• The Management, Technical and User Guides for the System.
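To illustrate the arithmetic at issue, here is a simplified sketch of a margin-deficit and minimum-liquidation calculation.  The figures and the single flat maintenance rate are hypothetical; real margin rules are considerably more involved:

```python
def margin_deficit(portfolio_value: float, loan: float,
                   maintenance_rate: float) -> float:
    """Shortfall of account equity below the maintenance requirement.

    equity = portfolio_value - loan; requirement = rate * portfolio_value.
    Returns 0.0 when the account is adequately margined.
    """
    equity = portfolio_value - loan
    return max(0.0, maintenance_rate * portfolio_value - equity)


def minimum_liquidation(deficit: float, maintenance_rate: float) -> float:
    """Smallest sale that cures the deficit, assuming proceeds repay the loan.

    Selling s leaves equity unchanged but lowers the requirement by
    rate * s, so the deficit is cured once s >= deficit / rate.
    """
    return deficit / maintenance_rate
```

On hypothetical figures – a $100,000 portfolio against an $80,000 loan at a 25% maintenance rate – the deficit is $5,000 and the minimum curative sale is $20,000.  A liquidation far in excess of that minimum is the kind of ‘significant departure’ that the records listed above would be examined for.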

Q14)  Advanced as they are today, could an algorithm be designed that would not only take into account the ability of Fund Manager for example to maximize profit or protect profits in a volatile market, but also identify promising stocks that are swimming against the grain?
Yes.  Algorithms can in principle be designed to do anything – they are only limited by the intelligence, imagination and experience of their conceivers, the skill of their software coders and the capabilities of the available technologies and resources.

For example, my own consultancy defined, designed and built a real-time commodities, OTC, derivatives and futures programmed-trading, mid-office, investor-handling and administration system for a commodity-trading entrepreneur client.  Based on high-quality thinking, proprietary economic models and mathematical techniques, and using sound charting tools and quality data analytics, it delivered, when launched, dealing gains for clients of typically 20% per month (sic), with an equally successful unique dynamic stop-loss downside-risk-limiting feature.
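A dynamic stop-loss of the general kind mentioned can be illustrated by a simple trailing stop.  This is a generic textbook sketch, not the proprietary feature itself:

```python
def trailing_stop_exit(prices, trail_pct):
    """Return the index at which a trailing stop exits a long position.

    The stop level ratchets up with each new price high and triggers
    when the price falls trail_pct below that high.  Returns None if
    the stop is never hit.
    """
    high = prices[0]
    for i, p in enumerate(prices):
        high = max(high, p)          # ratchet: the stop only ever rises
        if p <= high * (1.0 - trail_pct):
            return i
    return None
```

The design point is that the downside is limited relative to the best price achieved, not the entry price, so gains are progressively locked in as the position moves favourably.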

Q15)  Have you read the ‘Statement on Algorithmic Transparency and Accountability’ of the Association for Computing Machinery, US Public Policy Council (USACM)?  Is it possible, as it says, for well-engineered computer systems to have unexplained outcomes or errors?  Why?
Yes [Association for Computing Machinery US Public Policy Council (2017)].  As said earlier, irrespective of the subjective issue of ‘bias’ in the Requirements, there may always be software defects.  There are many reasons for these, ranging from inadequately defined, detailed or documented Requirements, inappropriate or poor choice of design, and badly project-managed construction and/or unsuitably skilled and experienced software programmers, to deficient or incorrectly planned or executed testing, faulty installation, deployment or implementation, and insufficiently reliable operational maintenance and update.

And there is also the reality of the ontological unreliability of software: computer science experts well know that, as a result of Gödel's Incompleteness Theorem: ‘The only thing that can be said with certainty about software is that it is definitely uncertain’.

Q16)  Are algorithms advanced to the stage where companies are able to quickly change them in a rapidly changing business environment?  Would you expect that to be the case for a market actor such as Fund Manager?
Yes; and yes.  However, the capability for rapid, business-reactive code changes and software re-versioning, re-purposing, re-testing and re-deployment has to be ‘designed-in’ from the start.  In my experience, it would be surprising if a market-leading financial institution like Fund Manager did not essentially have this embedded capability designed-in to its systems, to one extent or another.

Q17)  And are these changes made by highly specialized individuals such as yourself or are systems at the point now where they can learn to make the changes without human involvement?
There is increasing interest and research in, and trialling of, ‘self-learning’ computer programs, but they so far have relatively limited proven application, mostly within the software coding industry itself.  Changes in serious-scale commercially deployed systems are still for the most part made by highly specialized individuals, IT professionals.

See, for example, ‘How AI Is Making Software Development Easier For Companies And Coders’, Feb 5, 2020: “Artificial intelligence is the result of coding, and now coding is the result of artificial intelligence. Yes, AI has come full circle, because more companies and more coders are using it to aid the software development process.”

Q18)  Are there benefits to the public of having some level of transparency of algorithms in the financial industry?  What suggestions, if any, do you have on this subject?
This is an interesting subject, and part of the wider debate about independent oversight and monitoring of (the Requirements for) algorithms, particularly as regards ‘Government by Algorithm’.  This is something that I have explored in my recent learned journal paper [Castell (2021b)], giving some of my own innovative and professional suggestions.

In the financial industry there is already a level of transparency in regard to regulatory oversight – for example, audit by/reporting to regulators of systems compliance with KYC, AML, MIFID, MIFIR etc rules and protocols.

One of the major issues that I can see with greater ‘transparency’ would be the commercial confidentiality, and the ‘proprietary edge or advantage’, of the algorithms, which their proprietor financial institutions would, one expects, wish fiercely to protect and preserve.  They would probably also argue that imposing wider transparency would reduce the motivation of enterprises within the industry to develop new, improved algorithms, and constrain overall competitiveness in the industry – neither of which would be of benefit to the public, their customers.

I suspect that cases of the type above, Investor v Fund Manager, derived from my own recent experience, and the sort of issues raised therein, are increasingly going to feature in the financial investment world – for example, in regard to people’s pension funds and their management – as AI and ADS relentlessly ‘take over autonomously’ in financial services and, indeed, in all other sectors.  Furthermore, recent high-profile examples of software failures and associated disasters and tragedies, such as VW Dieselgate, Boeing 737 Max, and PO Horizon, serve to point up the critical issues that can only escalate as widescale software implementations, including ADS, become more deployed and firmly entrenched [Castell (2021b), (2020), (2021c)].

Care should be taken professionally when the subjective issues of ‘bias’ or ‘ethics’ in algorithms are raised.  It should be made clear to instructing lawyers and the courts that any review of the subjective concepts of ‘bias’ and ‘ethics’ properly belongs with the processes and protocols of the humans who specified the Requirements.  They should not expect to find any technical evidence thereof in the computer code itself.

Duly-diligent forensic ICT Professional expert investigation of such cases must also guard against the incorrect ‘presumption of the reliability of computer evidence’ that worryingly seems to have crept into pleadings brought before some courts, particularly in Criminal Cases, and to have been accepted unchallenged by presiding judges. [Castell (2021a)].

References

Association for Computing Machinery US Public Policy Council (USACM), 2017, “Statement on Algorithmic Transparency and Accountability”, 2 pages, January 12, 2017.

Castell, S., 2021a, “A trial relying on computer evidence should start with a trial of the computer evidence”, OPINION By Stephen Castell, 22 Dec 2021, “Learning from the Post Office Horizon scandal: ‘The most widespread miscarriage of justice in recent British legal history’ …”, Computer Weekly.

Castell, S., 2021b, “Direct Government by Algorithm. Towards Establishing and Maintaining Trust when Artificial Intelligence Makes the Law: a New Algorithmic Trust Compact with the People", Acta Scientific Computer Sciences 3.12 (2021): 04-21, November 10, 2021.

Castell, S., 2021c, “Slaying the Crypto Dragons: Towards a CryptoSure Trust Model for Crypto-economics”, 25 March 2021.  Chapter in Patnaik S., Wang TS., Shen T., Panigrahi S.K. (eds) Blockchain Technology and Innovations in Business Processes. Smart Innovation, Systems and Technologies, vol 219. Springer, Singapore, pp 49-65.

Castell, S., 2020, “The Fundamental Articles of I.AM Cyborg Law”, Beijing Law Review, Vol.11 No.4, December 2020.  DOI: 10.4236/blr.2020.114055.

Castell, S., 2018, “The future decisions of RoboJudge HHJ Arthur Ian Blockchain: Dread, delight or derision?”, Computer Law & Security Review, Volume 34, Issue 4, August 2018, Pages 739-753.


Dr Stephen Castell CITP CPhys FIMA MEWI MIoD
Chairman, CASTELL Consulting
PO Box 334, Witham, Essex CM8 3LP, UK
Tel: +44 1621 891 776        Mob: +44 7831 349 162