Priorities – Health, Family, Work – in THAT order


Projects have many priorities. But if you lose sight of the fundamental priorities, you undermine all the other priorities you establish for the project. Here’s some dialogue from one of my favorite series, The Tudors:

King Henry VIII:
In these last days Your Grace, I have been thinking a great deal about loss. What loss Your Grace, is to man most irrecoverable?

Charles Brandon, 1st Duke of Suffolk:
His virtue.

King Henry VIII:
No, for by his actions, he may redeem his virtue.

Charles Brandon, 1st Duke of Suffolk:
Then his honor.

King Henry VIII:
No, for again he may find the means to recover it, even as a man recovers some fortune he has lost.

Charles Brandon, 1st Duke of Suffolk:
Then I cannot say, Your Majesty.

King Henry VIII:
Time, Your Grace. Of all losses, time is the most irrecuperable, for it can never be redeemed.

Wisdom from a Fortune 500 CEO:

“Your health comes first. Your family comes second. Your job comes third. Recognize and organize the first two so you can take care of the third.” 

Stupidity from a CEO:

“Could you work 130 hours in a week? The answer is yes, if you’re strategic about when you sleep, when you shower, and how often you go to the bathroom.”
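For perspective, a little back-of-the-envelope arithmetic (my own figures, not the CEO’s) shows what a 130-hour week actually leaves for the two priorities that are supposed to come first – health and family:

```python
# Back-of-the-envelope check on the "130-hour work week" claim.
HOURS_PER_WEEK = 7 * 24          # 168 hours in a week
WORK_HOURS = 130                 # hours the CEO claims are workable

remaining = HOURS_PER_WEEK - WORK_HOURS   # 38 hours left over
per_day = remaining / 7                   # roughly 5.4 hours per day

print(f"Hours left for sleep, food, family and health: {remaining} per week")
print(f"That is about {per_day:.1f} hours per day for everything that is not work")
```

Thirty-eight hours a week for sleeping, eating, showering, and family. That is not strategy; that is a health and family crisis on a schedule.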


Surveillance Cameras Made by China Are Hanging All Over the U.S.


In response to this article – https://www.wsj.com/articles/surveillance-cameras-made-by-china-are-hanging-all-over-the-u-s-1510513949 – I say:

As a CISSP and someone who follows cybersecurity closely, I think the US will one day sorely regret its complacency about cybersecurity. We have not yet been on the receiving end of a real cyberattack – one that cripples infrastructure and commerce and creates real chaos. But we can be sure our adversaries have many of our most serious vulnerabilities carefully mapped out and have a plan ready to exploit them. Until that day, they will be happy to take advantage of our stupidity – and, in many cases, gross negligence – by staying under the radar as much as possible while they exfiltrate commercial IP and military and government secrets and manipulate social media to their advantage. PS: Equifax – please double-check that you are, in fact, really encrypting your data.


AI That Can Build AI


How quickly will AutoML or its equivalents put this powerful technology into the hands of people who aren’t smart enough to understand it? I know the answer.  A LOT SOONER THAN ANYONE THINKS.  Past performance is the best predictor of future performance.
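To make the point concrete, here is roughly how little code an off-the-shelf automated-ML toolkit demands today. This is a hedged sketch using the open-source auto-sklearn library (my choice for illustration, not the Google AutoML system the article describes); the user supplies data and a time budget, and the library searches for models on its own:

```python
# Minimal sketch of automated model building with auto-sklearn
# (illustrative only; assumes auto-sklearn and scikit-learn are installed).
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import autosklearn.classification

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# The library searches over preprocessing steps, models, and hyperparameters
# by itself -- the user needs no understanding of what it tries or why.
automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300   # give it five minutes and walk away
)
automl.fit(X_train, y_train)

predictions = automl.predict(X_test)
print("Held-out accuracy:", accuracy_score(y_test, predictions))
```

Whatever the specific library, the barrier to entry is now a handful of lines – which is exactly the trend that worries me.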

USB Stick Found in West London Contained Heathrow Security Data


From: https://www.theregister.co.uk/2017/10/30/heathrow_usb_security_blunder/?mod=djemCybersecruityPro&tpl=cy

Yikes. Per the final sentence in the article: “As for the wider implications, they barely need spelling out: had the chance passerby been someone less kindly disposed towards the UK than the finder of the stick, the consequences could have been seriously bad.”

Seriously bad. Are USB sticks really even needed anymore? They’re just another security risk on top of a very long list of security risks. Is there really THAT much value added in allowing their use?

An Appalling State of Affairs with Social Media. But AI? No Worries – NOT!


So, for anyone who just watched the congressional hearings on social media, especially the final 10-15 minutes: it is an appalling state of affairs. We are “totally unprepared” for what we’re dealing with regarding thought manipulation of the US population. After watching this hearing, do I have confidence in our technical gurus keeping a lid on all the risks AI presents? Not only no, but hell no. The Silicon Valley tech gurus didn’t see or respond to this massive attack on our democracy, or to their platforms being used to foment hate and terrorism, and I’m not at all confident that they aren’t equally blind to the potential misuse of AI. Maybe runaway AI is LOW PROBABILITY, but it is HIGH IMPACT, which suggests EXTREME caution.
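The “low probability, high impact” point is the standard expected-loss argument. A toy calculation (every number here is invented by me purely for illustration) shows why a rare risk can still dominate the picture:

```python
# Toy expected-loss comparison (all figures invented for illustration).
risks = {
    "routine data breach":  {"probability": 0.30, "impact": 10},      # frequent, modest damage
    "runaway / misused AI": {"probability": 0.01, "impact": 10_000},  # rare, catastrophic damage
}

for name, r in risks.items():
    expected_loss = r["probability"] * r["impact"]
    print(f"{name:22s} expected loss = {expected_loss:g}")

# The rare, catastrophic risk dwarfs the routine one -- which is why
# HIGH IMPACT argues for extreme caution even at LOW PROBABILITY.
```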

To Keep Up With AI, We’ll Need High-Tech Brains


Re this article: https://www.wsj.com/articles/to-keep-up-with-ai-well-need-high-tech-brains-1509120930

Governments have a bad track record of anticipating and preparing for the economic and social disruption caused by previous industrial revolutions. We saw in the last election cycle how information technology can be used to manipulate thought and behavior; government and industry didn’t recognize what was going down until after the election. Technology moves fast and governments move at a glacial pace – that alone is reason for concern.

Whether sentient AI is fact or fantasy, there can be no argument that AI will have a profound impact on civilization. Maybe it already has and we don’t know it . . . It is a dual-use technology that will no doubt deliver some amazing benefits. But what about the other side of dual use? People are predisposed to underestimate risk and overestimate opportunity. When I get a bunch of really smart technologists in a room at the start of a project and ask, “What are the major risks on this project?” what do I hear? Crickets. Therein lies the problem. Descartes’ view on common sense is also relevant:

“Common sense is the most fairly distributed thing in the world, for each one thinks he is so well-endowed with it that even those who are hardest to satisfy in all other matters are not in the habit of desiring more of it than they already have.”

Surprise, surprise, surprise!


http://www.kurzweilai.net/alphago-zero-trains-itself-to-be-most-powerful-go-player-in-the-world?utm_source=KurzweilAI+Weekly+Newsletter&utm_campaign=8d0782fe16-UA-946742-1&utm_medium=email&utm_term=0_147a5a48c1-8d0782fe16-282205105

This is not the first surprise regarding AI, nor will it be the last. I think it will progress from “Wow! Amazing!” to “Uh oh” at some point. What would “uh oh” look like? I haven’t really seen much written about that . . .

Anatomy of a Calamity – HOW THE VA’S AURORA HOSPITAL PROJECT SPIRALED OUT OF CONTROL


Article: http://extras.denverpost.com/aurora-va-hospital/ 

Here is a case study that every project manager should read. As a taxpayer, a veteran, and a project manager, I find the level of incompetent project management infuriating! Excerpts from the article appear below. The major failures:

  • Communications
  • Gold plating the project
  • Scope creep
  • Informal procurement processes and use of an *entirely new* procurement process
  • Inadequate risk management
  • Failure to react to “Lessons Learned”

As usual, the culpable parties have mostly retired and are collecting nice government pensions after leaving this fiasco behind.

To quote Marvin the Martian: “This makes me very, very angry.” My long-standing recommendation? CLOSE THE VA. They don’t seem to be able to do much right, and the bureaucracy there is dug in like ticks. Accountability is nowhere to be found.

____________________________________________________________________________________________

The biggest construction failure in VA history began with a handwritten note signed two days before Veterans Day 2011.

That brief note became the pact to start work on a state-of-the-art medical campus spread across 31 acres.

Its signing also marked the moment when the VA hospital in Aurora began to devolve from a mismanaged project to a national calamity.

The VA could not hold up its end of the deal and control its designers, who initially operated under a contract that left the construction price blank. It later battled KT [Kiewit-Turner, the project’s construction contractor] in court for 17 months and lost. The agency stonewalled elected officials as costs, delays and questions mounted, and its own investigative staff did nothing.

VA officials pressed ahead with the project despite repeated warnings — internal and external — about the project’s high risk of busting its budget.

Even now, there is no agreement on fully funding the new medical campus, which the VA admitted in March could cost a stunning $1.73 billion. The design includes features such as a curved lobby spanning two city blocks, 43 elevators and a vivarium for animal experiments. The cost is five times an initial $328 million estimate and nearly three times the $604 million construction target.

One reason the cost of the Aurora hospital has risen over the years is the VA decision to use a contracting method — known as integrated design and construct, or IDC — that is largely unfamiliar to the agency.

No major VA project had ever been completed using an IDC contract. None of its project leaders in Colorado were experienced with it.

From the project’s earliest days, there were issues with the design.

In January 2006, a high-powered coalition of architects and engineers was contracted to develop a blueprint for the facility. Later, work would be suspended twice as the VA changed size and budget estimates.

Tim Pogany, the VA’s project manager in Aurora, testified that he once considered firing the design team because they listened to the advice of a former VA secretary who told them, “The pie in the sky is what we’re shooting for here, so whatever you want from the Denver area, I will get you, whatever additional funds you need, so design me what the Medical Center wants.”

Pogany said the secretary was Jim Nicholson, a former Denver resident and Republican National Committee chairman who led the agency from 2005 to 2007.

A few weeks after agreeing to the KT deal, the VA wrote the design team and made clear it expected a plan that would fit within the budget.

Fromm responded in January 2012 with his own letter that accused the VA of ignoring them and withholding crucial information.

“Proceeding with construction on such a major project without a common understanding of and access to the project’s documents of record would be reckless on our part as it would be foolish on the government’s part,” he wrote.

Another factor was time. The VA didn’t have another firm lined up to peer review the design work, a critical oversight. This failure — coupled with already-poor communication — delayed the delivery of plans until August 2012, months behind schedule.

When they did arrive, KT attorneys noted the plans “had to be revised at least once, which had a significant impact on the subcontractor community who lost confidence in the project and its design.”

The lack of oversight was made worse by VA obstinacy. In dealing with Congress, agency officials often closed ranks and shielded internal decisions.

At a House hearing in May 2013 to examine the GAO report’s findings as well as the pace of VA construction, frustrated members of Congress got few answers.

“One of the most distressing items in the (GAO) report is that VA failed to learn from its mistakes as it went from project to project,” said Coffman, whose congressional district included the hospital starting in early 2013.

Lawmakers such as Bennet and Perlmutter said they told the VA that fighting KT was a losing strategy and that the agency should settle.

The VA never did. The result was a crushing decision against the agency in December by the U.S. Civilian Board of Contract Appeals.

The court found the behavior of VA officials did not comport with “standards of good faith and fair dealings required by law.” The VA never gave KT a workable design — in part because it “did not control its designer” — and, when presented with more cost-effective options, VA officials “paid no heed.”

Phillipa Anderson, who led the VA legal team that developed the Aurora contract and fought KT in court, retired last spring after being questioned about her role.

Every other senior executive involved, VA officials say, is gone from the project. Many have retired, and lower-level staffers are working elsewhere in the agency.

At a recent congressional hearing, VA Secretary Robert McDonald said nine of the 17 top leaders at the VA are new since he took the helm in July 2014.


Tickling the Dragon’s Tail


I read today about a recent *leap* in AI capability at playing Go. AI surmounted the complexities of Go about ten years ahead of schedule.
Regarding AI in general and the risks associated with it, I think back to the early development of nuclear weapons. In 1944, Los Alamos began a dangerous series of near-criticality experiments on fissionable material; the plutonium core used in later tests came to be known as “the Demon Core.” Commenting on the risk, physicist Richard Feynman reportedly said the experiments were “like tickling the tail of a sleeping dragon.” Eventually, a researcher made a mistake and the Demon Core went supercritical. There was a blast of blue light and a wave of heat. The scientist conducting the experiment reacted quickly and used his right hand to knock the dropped brick to the floor. In those few moments, he received a fatal dose of radiation. The core calmed down, but 25 days later the scientist fell into a coma and died of severe radiation poisoning.

Will we have the opportunity to “knock the brick off the table” when we discover something has gotten out of control with AI? What? AI out of control? That could NEVER happen – or could it? Unintended consequences of human activity are everywhere you look. The stakes are high. Extreme caution is advised.

Google Taught A.I. How to Program More A.I.


I am not so sure I am comfortable with this. Per the article: “According to Google CEO Sundar Pichai, only ‘a few Ph.D.s’ currently have the skills necessary to create the most complex A.I. systems. The proliferation of AutoML’s methods could allow more engineers and even students to work with A.I. tech without necessarily having the specific programming know-how to build it from scratch.” Isn’t this how we have gotten into trouble in the past – putting powerful tools in the hands of people who are not competent to use them?