An interesting article I wrote that appeared on the IAP’s website this week. It explores the use of technology in elections and the vulnerabilities it exposes.
-
National TV Awards – MR BATES VS THE POST OFFICE
This is an article I wrote about the ongoing scandal and how the docu-drama won the award at the NTVA. Article first published on The Institution of Analysts and Programmers website.
-
THE MILLENNIUM BUG: 25 YEARS ON
A Tale of Hype, Herculean Efforts, and Successful Outcomes (in the main)
INTRODUCTION
As the clock ticked closer to midnight on December 31, 1999, the world braced itself for a technological apocalypse known as the Millennium Bug, or Y2K. This looming threat was predicted to disrupt computer systems worldwide, leading to catastrophic failures in everything from financial systems to power grids. The hysteria in the media surrounding Y2K was intense, resulting in a massive, coordinated global effort to prevent the feared outcomes. Yet, as the new millennium dawned, the expected chaos failed to materialize, leaving many to wonder: was the threat real, or was it all just hype?
THE ISSUE: WHAT WAS THE MILLENNIUM BUG?
The Millennium Bug, or Y2K, was in reality two separate problems.
The first stemmed from a seemingly innocuous programming shortcut used in the early days of computing. To save valuable memory space and money, programmers represented years with only two digits (e.g., 1970 as “70”). This practice worked well until the year 2000 approached. The fear was that systems would interpret the year “00” as 1900, potentially causing errors in date-sensitive operations across a myriad of systems, including financial transactions, power grids, and government databases.
The second was a specific technical issue involving PC BIOS dates: many older computers’ Basic Input/Output System (BIOS) would not correctly recognize the year 2000, potentially causing PC startup failures. Chris Myers, a Fellow of the Institution, wrote this technical document back in 1997 to explain the hardware issues.
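To make the first problem concrete, here is a minimal sketch in Python (purely illustrative; the affected systems were typically written in COBOL and other older languages) showing how a two-digit year could be misread, alongside a simple “windowing” fix of the kind many remediation teams applied:

```python
# Illustrative only: how a two-digit year can be misinterpreted, and a
# simple "windowing" fix commonly used during Y2K remediation.

def naive_expand(two_digit_year: int) -> int:
    """The pre-Y2K assumption: every two-digit year belongs to the 1900s."""
    return 1900 + two_digit_year

def windowed_expand(two_digit_year: int, pivot: int = 50) -> int:
    """Windowing: years below the pivot are treated as 20xx, the rest as 19xx."""
    return (2000 if two_digit_year < pivot else 1900) + two_digit_year

renewal_year = 0  # a policy renewal recorded as "00"
print(naive_expand(renewal_year))     # 1900 - the renewal appears a century overdue
print(windowed_expand(renewal_year))  # 2000 - interpreted as intended
```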
THE HYPE: THE WORLD ON EDGE
The potential consequences of Y2K were widely publicized, creating a global frenzy. Media outlets predicted widespread chaos: banks would fail, airplanes would fall from the sky, and essential services would grind to a halt. Governments, businesses, and individuals prepared for the worst. Companies spent billions of dollars on remediation, and contingency plans were made to address possible disruptions. Some people even hoarded supplies, fearing that basic utilities might fail.
While the world woke up to the idea of the Millennium Bug during 1999, in preparation for the potential downfall of society at one minute past midnight on the 1st of January 2000, the problem was already a threat on the 1st of January 1999. Many systems, for example motor, house and life insurance and banking systems, were already recording data where the next renewal fell in the year 2000, or “00”. If a system was not ready, the potential for failure was high.
THE WORK: THE GLOBAL EFFORT
In response to the impending threat, a monumental effort was launched to fix and upgrade systems. The work involved several key steps:
1. *Inventory and Assessment*: Identifying systems, hardware and software that were potentially vulnerable.
2. *Remediation*: Updating or replacing hardware and software to handle the date change correctly, including updating PC BIOS to ensure proper date recognition.
3. *Testing*: Rigorous testing to ensure that changes were effective and did not introduce new problems.
4. *Contingency Planning*: Developing backup plans to address potential failures.
This colossal undertaking saw cooperation across industries and borders, involving governments, private sector companies, and international organizations. The UK alone is estimated to have spent over £20 billion, and the United States over $100 billion, on Y2K preparations.
SPECIFIC EXAMPLES: UK & WORLDWIDE
In the UK, the government launched the Action 2000 initiative, led by the respected business leader Don Cruickshank. This initiative aimed to ensure that both public and private sector organizations were prepared for Y2K. The National Health Service (NHS), for instance, invested significantly in ensuring that its systems, including patient records and medical devices, would not fail due to Y2K-related issues.
One notable global example is the work done by the financial sector. The New York Stock Exchange and NASDAQ underwent extensive testing and upgrades to prevent trading disruptions. Similarly, in Japan, the government and private sectors collaborated to ensure that essential services, such as banking and utilities, were safeguarded against potential Y2K failures.
Some IAP members reported being extremely busy, while others said the period was quiet, or even mundane.
THE OUTCOME: MINOR GLITCHES, NO MAJOR FAILURES
As the clock struck midnight on January 1, 2000, the world held its breath. But rather than the predicted chaos, the transition to the new millennium was surprisingly smooth. Minor glitches occurred, but nothing on the scale of the anticipated catastrophe. Some examples of minor issues included:
1. *United States*: A few slot machines in Delaware stopped working temporarily, and in Washington, D.C., a couple of spy satellites experienced minor data hiccups.
2. *Japan*: Some minor glitches were reported, such as errors in radiation monitoring equipment, which were quickly resolved.
3. *Australia*: Bus ticket validation machines in two cities failed to operate correctly for a few hours.
4. *UK*: A few credit card transactions went awry.
5. *South Korea*: Spare a thought for the 170 people who were sent court summonses to attend on the 4th of January 1900.
Overall, these minor incidents were quickly addressed, and no significant disruptions were reported.
ANALYSIS: WAS IT WORTH IT?
In hindsight, the smooth transition can be seen as evidence of the success of the massive remediation efforts. Without the extensive preparations, the outcome might have been very different. The lack of major incidents was not because the threat was imaginary, but because of the proactive measures taken to address it.
Furthermore, the Y2K preparations had several positive side effects:
*Modernization*: Many outdated systems were updated or replaced, leading to more efficient and reliable operations.
*Increased Awareness*: The event raised awareness of the importance of maintaining and updating critical infrastructure.
*Preparedness*: Organizations developed better contingency planning and risk management practices.
CONCLUSION
The story of the Millennium Bug is a testament to the power of coordinated global action in the face of a common threat. While the anticipated chaos did not materialize, the extensive preparations undoubtedly played a crucial role in ensuring a smooth transition into the new millennium. Y2K serves as a valuable lesson in the importance of vigilance, preparation, and collaboration in managing technological risks. In the end, the Millennium Bug was not a catastrophe, but a catalyst for improvement and modernization.
THE FUTURE
While researching this article, one of our Fellows, Irene Jones, mentioned epoch time, also known as UNIX time. Older UNIX-based systems store the time as the number of seconds that have elapsed since 00:00:00 on the 1st of January 1970, and it will have a similar crisis moment in January 2038. On the 19th of January 2038, any 32-bit signed integer field used to store the time will overflow. The solution, of course, is to replace these with 64-bit fields, but who knows what dragons may exist in some of these older systems. It may seem a way off, but some serious system changes can be expected in the years ahead.
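A minimal sketch in Python, purely for illustration, of exactly where the 32-bit limit bites:

```python
# Illustrative only: what happens when UNIX time no longer fits in a
# signed 32-bit integer on the 19th of January 2038.
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1  # the largest value a signed 32-bit field can hold

print(EPOCH + timedelta(seconds=INT32_MAX))  # 2038-01-19 03:14:07+00:00

# One second later, a 32-bit field wraps around to a large negative number...
wrapped = (INT32_MAX + 1) - 2**32
print(EPOCH + timedelta(seconds=wrapped))    # 1901-12-13 20:45:52+00:00

# ...whereas a 64-bit field carries on for roughly another 292 billion years.
```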
-
D-DAY 80 YEAR ANNIVERSARY
On the 6th of June 2024, the 80th anniversary of D-Day, events were held across the UK to commemorate the largest seaborne invasion in history; a mission that marked the beginning of Western Europe’s liberation in the Second World War.
Here’s an overview of how computing helped at this critical time.
COMPUTING IN THE 1940s AND ITS ROLE IN D-DAY: The 1940s marked a significant period in technological advancement, particularly in computing. This era, defined by World War II, saw the development and use of early computers that played critical roles in various military operations. One of the most notable events where computing technology made a significant impact was during the planning and execution of D-Day, the Allied invasion of Normandy on June 6, 1944.
EARLY COMPUTERS & THEIR CAPABILITY: In the early 1940s, the concept of digital computing was in its infancy. The computers of this era, such as the British Colossus, were rudimentary by today’s standards but revolutionary at the time. These machines could perform calculations at speeds unattainable by humans and were crucial in processing large volumes of data quickly.
COLOSSUS: Developed by British engineer Tommy Flowers, Colossus was designed to break German encryption, specifically the Lorenz cipher used by the German High Command. Its ability to decipher encrypted messages allowed the Allies to gather crucial intelligence.
CODEBREAKING & INTELLIGENCE: One of the most critical contributions of computing to D-Day was in the field of codebreaking. The British Government Code and Cypher School at Bletchley Park, home to the Colossus computer, played a pivotal role in deciphering German communications.
DECIPHERING THE LORENZ CIPHER: The Lorenz cipher, used by the German High Command, was more complex than the Enigma cipher. Colossus, which became operational in late 1943, was instrumental in breaking this cipher. The intelligence gleaned from these decrypted messages provided the Allies with insights into German troop movements, defensive strategies, and overall military planning.
OPERATIONAL SECURITY: Understanding German communications allowed the Allies to implement effective countermeasures and deception strategies, such as Operation Fortitude, which misled the Germans about the actual landing site of the invasion.
WEATHER FORECASTING: Another critical aspect of D-Day planning was weather forecasting. The success of the invasion was heavily dependent on favorable weather conditions.
METEOROLOGICAL CALCULATIONS: Meteorologist Group Captain James Stagg, who advised General Eisenhower, relied on observations (from a ship several hundred miles off the west coast of Ireland), discussions with colleagues and his intuition, rather than computational aids, to predict a brief window of good weather, which ultimately determined the timing of the invasion. Today, satellites and buoys in the ocean give far more accurate weather predictions (sometimes!).
LOGISTICS & PLANNING: The sheer scale of Operation Overlord, the code-name for the Battle of Normandy, required meticulous planning and coordination. Most of the planning was manual, but picking a time to land came down to a simple computing device that calculated tide tables for high and low water.
CONCLUSION: The computing technology of the 1940s, while primitive compared to modern standards, played a crucial role in the success of D-Day. The integration of these technologies into military operations not only contributed to the success of D-Day but also laid the groundwork for the development of modern computing and its applications in various fields.
The biggest contribution to D-Day was made by the 130,000 men on the day, nearly 5,000 of whom died on the landing areas. This allowed nearly one million men over the following weeks to land in France and go on to liberate Europe of the evil that had plagued it for many years.
-
Y2K. JOHN’S STORY
Blood, Sweat and Years: Y2K remembered
It only seems a few years ago that the world was awash with stories of how all the IT systems, PCs and games consoles were going to stop on the 1st of January 2000.
Companies were expecting to spend millions on testing and fixing their systems, while governments were preparing for everything from nuclear reactors to our water systems just stopping, and the world ending.
HARBINGERS OF DOOM EVERYWHERE
I was primarily working for a large insurance company that supplied quotation systems to high street Independent Financial Advisors (IFAs). I was managing a household insurance quotation system that interfaced to an early CRM system, which we also wrote; we also had a motor insurance system and a business insurance system, managed by different teams within the company.
So one day in early 1999, I was asked to assess what was required to make the systems Year 2000 compliant, implement the changes and get them tested. I was given six months to do this.
As I looked at hundreds of thousands of lines of code written in COBOL, I realised this would be a very big job. Then I had an idea: the code was effectively all in text files, which I could read programmatically.
So I wrote a program to read through all the source code files, find likely candidates for date fields and report on them. This went very well; then I realised I could use the same code to alter much of the existing code automatically, and just list what I needed to check by hand.
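The original program is long gone and was written against COBOL source, but a rough sketch of the idea, in Python and with purely hypothetical field-name patterns, file extensions and directory names, might look like this:

```python
# Illustrative sketch only: scan source files for likely date fields and
# report where they appear, so a human can review each candidate.
import re
from pathlib import Path

# Hypothetical patterns - real COBOL date fields varied by shop naming conventions.
DATE_FIELD = re.compile(r"\b\w*(DATE|DTE|YY|YEAR)\w*\b", re.IGNORECASE)

def find_date_candidates(source_dir: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, line text) for every likely date reference."""
    hits = []
    for path in Path(source_dir).glob("*.cbl"):  # assumed file extension
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if DATE_FIELD.search(line):
                hits.append((path.name, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for name, lineno, text in find_date_candidates("cobol_src"):
        print(f"{name}:{lineno}: {text}")
```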
Instead of six months, I had all the changes done in six weeks, unit and system tested in another couple of weeks, and out to our integration team to test. In the end, my changes were rolled out to the IFAs just over four months after starting the project.
From my own recollection, while the Y2K problem was a real potential issue, the dedication of so many in the industry meant that on the 1st of January 2000, pretty much everything worked without a hitch. Many people today say it was massive hype, but it wasn’t; it came down to a lot of people working very hard to ensure nothing went wrong.
-
Tags
Tagging: From Web 2.0 Buzzword to Powerful Organization Tool
In the ever-expanding world of digital information, tags have become an essential tool for organization and discovery. But how did these simple labels evolve, and how do they power the way we interact with data today?
The concept of tags has roots in the early days of information management. In libraries, librarians meticulously categorized books using Dewey Decimal or Library of Congress systems. In computing, tags emerged as a way to describe files and data structures. However, the rise of Web 2.0 in the early 2000s truly democratized tagging. Platforms like Flickr, a photo-sharing site, and Delicious, a social bookmarking tool, allowed users to assign their own keywords, or “tags,” to photos, articles, and bookmarks. This folksonomy, or “classification by the people,” revolutionized how content was categorized and discovered.
Imagine a user uploading a photo on Flickr of their birthday party. Instead of relying on a pre-defined category system, they could add tags like “birthday,” “friends,” “celebration,” “cake,” and maybe even the specific location like “Pizza Planet.” This not only helps them find the photo later but also allows others searching for birthday party photos, friends’ gatherings, or even Pizza Planet events to discover the image.
Tags offer several advantages over traditional, top-down classification systems. They are:
- User-driven: Tags reflect the natural language users employ, making them more intuitive and discoverable. For instance, a user might tag a funny cat video as “hilarious,” “cats,” “lolcats,” whereas a more formal system might categorize it under “Pets” or “Animals.”
- Flexible: Tags can be specific or general, allowing for nuanced categorization. A recipe could be tagged with “vegetarian,” “Italian,” “pasta,” and “quick meal,” providing multiple avenues for users to find it.
- Collaborative: In social settings, users can add tags to each other’s content, fostering shared understanding. On a photo-sharing platform, friends might add the missing tag “beach” to a photo someone uploaded from their seaside vacation.
Tags have become a cornerstone of data management across various fields. Here are some specific examples:
- Digital Libraries: Libraries leverage tags to categorize articles, ebooks, and other digital resources. An academic paper on climate change might have tags like “global warming,” “environment,” “sustainability,” allowing researchers to discover relevant information more easily.
- Photo Management: Photo management software allows users to tag photos with keywords like location, event, or people. Imagine tagging a photo from your Paris trip with “Eiffel Tower,” “France,” “vacation,” and the names of your travel companions. Years later, searching for any of these tags will instantly bring up the photo.
- Music Streaming: Music streaming services use tags to categorize music by genre, mood, or activity. A song might have tags like “electronic,” “dance,” “workout,” enabling users to discover music for specific situations. Whether you’re looking for upbeat party tracks (“dance”) or relaxing background music (“chill”), tags help narrow down the search.
- Social Media: Hashtags are used on almost every social media platform, from Facebook to LinkedIn to Twitter (X), tagging the subject, the author, the company, the specific interests of the post and so on. Users can be whisked into the wider world where the same hashtag has been used in different scenarios for different things: click #Cyber and you may get internet security, Doctor Who or punk music.
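Under the hood, all of these uses rest on much the same simple structure: an index from each tag to the items that carry it. A minimal sketch in Python, with made-up items and tags purely for illustration:

```python
# Minimal sketch of the data structure behind tag-based discovery:
# an inverted index mapping each tag to the set of items carrying it.
from collections import defaultdict

tag_index: dict[str, set[str]] = defaultdict(set)

def add_item(item: str, tags: list[str]) -> None:
    """Register an item under every tag attached to it."""
    for tag in tags:
        tag_index[tag.lower()].add(item)

def find(tag: str) -> set[str]:
    """Return every item carrying the given tag."""
    return tag_index.get(tag.lower(), set())

# Hypothetical content, purely for illustration.
add_item("birthday_photo.jpg", ["birthday", "friends", "cake", "Pizza Planet"])
add_item("pasta_recipe.html", ["vegetarian", "Italian", "pasta", "quick meal"])

print(find("birthday"))      # {'birthday_photo.jpg'}
print(find("pizza planet"))  # {'birthday_photo.jpg'}
```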
The Serendipitous Journey of Tags
However, the true power of tags lies in their ability to take you down unexpected paths. Imagine researching the history of robotics for a school project. You start with articles tagged “robots” and “artificial intelligence.” But then you stumble upon a blog post tagged “robots” and “Isaac Asimov,” the science fiction author famous for his “Foundation” series. Intrigued, you delve deeper, discovering a whole new layer of thought about the potential future of robotics inspired by Asimov’s fictional universe. This is the serendipitous journey that tags can enable, where a simple keyword opens doors to entirely new areas of exploration.
This concept resonates with the overarching theme of Asimov’s Foundation series. In the series, human knowledge is meticulously compiled into the Encyclopedia Galactica for future generations, while the Seldon Plan maps out humanity’s future. The true value of the Plan lies not just in the information behind it, but in the way it can be interpreted and reinterpreted, leading to unforeseen consequences and shaping the course of galactic history. Just like the Seldon Plan, tags offer a framework for organizing information, but it’s the user’s exploration and the connections they forge that unlock the true potential of this knowledge.
The Future of Tags: AI and Beyond
The future of tags with AI goes beyond just suggesting basic tags or locations. Here’s how AI might revolutionize tagging:
- Automatic Tagging: AI could analyze content in more depth, automatically assigning not just basic tags but also complex concepts. Imagine uploading a scientific paper. AI could not only tag it with “physics” and “astrophysics” but also identify specific subfields or even groundbreaking theories discussed within the paper.
- Personalized Tags: AI could personalize tags based on user preferences. For music streaming services, AI might suggest tags based on a user’s listening history, recommending similar artists or genres they might enjoy.
- Evolving Tags: Tags could become dynamic, evolving as content is consumed and interacted with. Imagine a news article about a developing situation. As new information emerges, AI could update the tags to reflect the latest developments, ensuring users have access to the most up-to-date information.
However, the rise of AI in tagging also presents challenges:
- Bias: AI algorithms can inherit biases from the data they are trained on. This could lead to skewed or inaccurate tags, requiring careful monitoring and mitigation strategies.
- Over-reliance: Overdependence on AI-generated tags could stifle human creativity and critical thinking in the tagging process. Finding the right balance between AI assistance and human expertise will be crucial.
In conclusion, tags have come a long way from their humble beginnings as simple keywords. They have become a powerful tool for organization, discovery, and even serendipitous exploration. As AI continues to develop, the future of tags promises even greater levels of automation, personalization, and dynamic information management. However, it’s important to navigate this future with a critical eye, ensuring AI complements rather than replaces human judgment in the crucial task of tagging information.
-
ONE WINTER’S NIGHT
John Ellis FIAP, recalls another computer operator ‘adventure’.
Back in the early days, my principal weapons of choice were ICL (2904, 2946, ME29, 3900s) and Honeywell (62/40, 62/60) mainframes. Around the mid-80s I was working on site with a food wholesaler and retailer as support programmer and occasional computer operator, on an ICL ME29 running a stock and order processing system. No Sage accounting then.
We had a small PC-based system (not IBM) attached to the mainframe. Stores would scan their orders onto a Micronics terminal, transmit them to us in the evening and we would update the orders on the mainframe systems.
One evening, while it was snowing outside (rare for the South coast of England at that time) the mainframe suddenly died. We were getting all sorts of alarm alerts but after a few attempts to reboot the system we had to give up and call the ICL engineer. It took him around 2 hours to get to us. When he arrived he spent around an hour investigating and scratching his head and eventually said: “I need to call out another engineer with a full set of boards for this computer. He will be here tomorrow around 6am”.
This was not good. We had to send picking lists to the warehouse by 2am so that produce could be on the lorries by 6am, ready to deliver to the shops.
Thankfully I’d written a rudimentary stock system (an off-the-books exercise) that could take the Micronics terminal data, use the stock figures held on disk and produce a picking list. Hooray, I hear you say. BUT! This was a ‘what if the computer room was destroyed’ system. It was written in MicroFocus COBOL and ran from a floppy disk. The other problem was that the emergency floppy disk system was not on site. It was at my home, some eight miles away.
Accompanied by another computer operator, I got into my car (a 1973 Morris Oxford) and we headed to my flat, in the snow. Thankfully the roads were empty. We returned to the office through four inches of snow (roundabouts were fun) – no anti-skid or traction control then – and it took about an hour to get back.
Disk into PC – orders put into system – picking lists produced.
You would think this would be the point where people would be grateful. If only! Instead, the warehouse staff complained that the picking lists were not sorted by location in the warehouse!
The mainframe was down for four days. After replacing every card to no avail, it turned out to be a heat sensor that thought the mainframe had overheated. Although we had state-of-the-art air filters for the computer room, diesel particles from lorry exhaust had congealed in and blocked the sensor. We upgraded the backup system, but it was never used again.
Thankfully the Distribution Manager was very grateful. He gave me a case of scotch for my troubles.
We were young and daft, in our early-to-mid twenties. We should never have risked the car journey, even though we saved the day. Even so, I enjoyed my time as a computer operator!
-
DID WE PUT IT OUT?
John Ellis FIAP, recalls a computer operator ‘adventure’.
Back in the mists of time before becoming a programmer, I was a computer operator.
Being a computer operator in a mainframe environment was a time of great fun (when management went home) and frustration (when magnetic tapes failed). We had a lot of leeway and people wanted processing time which made one feel quite important. Still, I’m wandering away from this little gem of a story…
One night we were running two ICL 2946 mainframes to complete the overnight batches, as we did every night, when suddenly one of the printers, spooling out around 3,000 pages of reports, just stopped. The two of us walked over to the printer and lifted the hood to see whether it had run out of paper or jammed. No lights were showing, but on opening the lid smoke billowed out and up to the ceiling onto a smoke detector. No alarm, however (not surprising, as we’d quietened down our alarm bell by stuffing a punchcard into it!).
What could two twenty-something computer operators do? We went over to the fire extinguishers, read the instructions and picked up the correct one for an electrical fire. Well, I say one; that’s one each. Returning to the printer, we checked it was switched off and proceeded to empty both extinguishers into it. Certain the fire was out (there was no evidence of flames before or after the printer cut out), we shut off the fuse to that piece of equipment and switched the printing to an alternative printer.
Dutifully we filled in the operations log, handed over to the day shift, explained the issue and went home.
The next evening we returned. In the ops log was a note from the Operations Manager and the Bureau Manager explaining the ‘proper’ use of fire extinguishers and how the budget costs of such items should be considered in future. Okay. Yes. We got a bollocking!
Looking back I would do it all again. I enjoyed being a computer operator.
-
RISING ROBOTIC MALFUNCTION
Robotic Malfunction Raises Safety Concerns Amidst Growing Automation Trend.
In a recent incident that has sparked discussions about safety in automated workspaces, a robot at a manufacturing facility caused an accident resulting in injury. The event has shed light on the potential risks associated with the rising use of robots in industry.
While the details of the company’s liability are still under scrutiny, this occurrence adds to a series of robot-related accidents that have raised alarms globally. From assembly lines to surgical units, the integration of robots into human environments has not been without its perils. Incidents have ranged from minor malfunctions to severe injuries, emphasizing the need for stringent safety protocols.
In terms of legal repercussions, the prosecution of such incidents is a complex matter. Robots, being non-sentient entities, cannot be held legally responsible for accidents. However, companies might face legal action if found negligent. This could include inadequate safety measures, failure to adhere to industry standards, or insufficient training for staff working alongside robots. Regulatory bodies are pushing for more rigorous standards and clearer guidelines on human-robot interaction to prevent future accidents.
As the investigation into the recent incident continues, experts are calling for a balanced approach to automation—one that prioritizes safety without stifling innovation.
Please note that for the full details and to ensure accuracy, referring to official reports on robot accidents is recommended. The information provided here is a general summary and should not be taken as legal interpretation or advice.
Discuss Further
To find out more about this or the IAP, contact John Ellis: Senior Partner, Fellow of the Institution of Analysts and Programmers and CTO of Wellis Technology. John’s main focus is using IT to leverage efficiencies for small businesses, introducing systems and processes that short-circuit time-consuming manual work, improving workflow and the customer experience and reducing cost. Follow John on LinkedIn: https://www.linkedin.com/in/johnceellis/.