Over the years I’ve vacillated between the two extremes of “roll your own code” and “throw together a bunch of libraries.” I suspect I’m not alone in this endless pursuit of “the best” way to build software.
In the end, we all know that each project is unique, and perhaps more importantly, its requirements will change with time. A library you rely upon today may not have all the capabilities you need tomorrow, or perhaps, after real usage data pours in, you realize you only need 1/100th of some module’s functionality and that you can easily write that bit on your own.
That said, more often than not, when it comes to building a project you are bootstrapping on your own dime, you tend to look for solutions and tools that are already out there.
That’s exactly how I started out on my latest project, which I began writing code for about two weeks ago. But today, I realized there’s value to writing some things on your own in the early stages, even if it takes you some extra time. Let me take a step back first.
Last week I was cruising along with my Vue.js-based front end, amazed at how much progress I was able to make each day. Part of this was due to careful planning, and part was due to the rather permissive structure of Vue itself.
Over the weekend, I wanted to rough out a very simple prototype of in-browser editing for two pretty straightforward, but distinct use cases.
- The ability to edit a heading, represented by an h2 element. This would only allow for editing the text itself, and perhaps very basic formatting like bold or italic, but would not allow “full RTE” capability, HTML entry, etc. It should, however, allow a user to paste a value copied from elsewhere and convert it to plain text.
- The ability to use RTE-like functionality to compose, edit or otherwise author a document in the context of a div or similar contenteditable-enabled element. Editing here would allow for things like ordered lists, block quotes, hyperlinks, etc., and perhaps even the occasional image placement. Copy/paste should do a reasonable job of preserving formatting from external documents, and undo/redo should operate as natively as possible.
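For concreteness, here is a rough sketch of how those two modes might be initialized on top of contenteditable. This is my own illustration rather than the project’s actual code, and the function names are hypothetical:

```javascript
// Hypothetical sketch of the two editing modes described above.

// Mode 1: an editable heading, restricted to a single line of text.
function initHeadingEditor(el) {
  el.contentEditable = 'true';
  el.addEventListener('keydown', (e) => {
    // Keep the heading to one line; a paste handler would strip formatting.
    if (e.key === 'Enter') e.preventDefault();
  });
}

// Mode 2: a full document surface where rich formatting stays enabled.
function initDocumentEditor(el) {
  el.contentEditable = 'true'; // lists, block quotes, links, etc. allowed here
}
```

In principle both modes could live in one module, with the heading case simply passing a stricter configuration.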
Immediately I thought of three nice-to-haves:
- Use something already available, proven and tested
- Ideally use the same code for both instances—potentially through configuration or an API—so I didn’t have dependency trees for each use case, twice the code to manage, etc.
- Find something developer friendly, that is, something with modularity, a good API or some other way of extending and integrating the editor
Saturday came, and Saturday went. What I realized after doing a lot of research, was that I’d have to mock up the scenarios I wanted to test, and create a list of criteria I was going to use to judge what worked.
I picked up where I left off on Monday, and by Tuesday my head was spinning. I had roughly 20 simple prototype pages built out, and none of them really leapt out as a clear winner.
The strongest candidate for the full-fledged RTE was [Quill], which has a killer [v1 Beta out]. As far as a lightweight solution (and even some RTE functionality), [Medium.js] does a really good job leveraging modern browser features for simple editing.
But, this left me in the unenviable position of possibly having two solutions, one for each use case, or of extending one or the other to enable it to be used for both contexts.
While that was certainly possible, I found that Quill had one major flaw: undo/redo histories are maintained per instance, which gets absolutely crazy when you have many instances to manage. It also does not emit events as much as I’d like, so I’d have to write some code there that would be “delicate” at best, since it would live outside the core of the editor.
Medium.js lacks some of the more advanced RTE functionality, so I’d be doing a lot of coding to get it up to snuff for document editing. This is obviously more work than finding a way to restrict what a more feature-rich RTE can do.
And so it was that Wednesday I decided to write my own code to meet my use cases. By now, I have a very good definition of exactly what I need, and how those needs will evolve (at least for the next few months). I also have learned a lot from poking around inside so many great solutions that are out there.
Along the way, though, I’ve also learned just how much support there is in the browser for creating editable content. Starting from [this article] that covers the basics of creating your own RTE, it became clear that it wouldn’t be too hard to get something workable very quickly, then evolve it as I continued other feature development.
There are loads of resources related to this topic, most notably Mozilla’s docs on [execCommand] and a pointer to [some of the hurdles] that browsers put in your way. After about an hour, I had a passable editor that could handle RTE commands nicely, as well as the restricted case for editing a heading.
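To give a sense of the approach (my own sketch, not code from the article), the execCommand API documented on MDN makes basic formatting nearly one line per command; the toolbar wiring below is hypothetical:

```javascript
// Apply a standard execCommand formatting command to the current selection.
// Assumes a browser environment with a focused contenteditable region.
function applyCommand(cmd, value = null) {
  document.execCommand(cmd, false, value);
}

// Hypothetical toolbar wiring: buttons carry a data-cmd attribute,
// e.g. 'bold', 'italic', 'insertOrderedList', 'formatBlock'.
function wireToolbar(toolbar) {
  toolbar.addEventListener('click', (e) => {
    const cmd = e.target.dataset && e.target.dataset.cmd;
    if (cmd) applyCommand(cmd);
  });
}
```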
The one sticky wicket is, of course, copy/paste, but with some tweaking of [this solution] ([here’s a fiddle]), I’ve got just about all the external formatting I want to support in place, and none of the stuff I don’t want.
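For the restricted heading case, the general shape of a plain-text paste handler looks something like this. This is a simplified sketch of the idea rather than the linked solution itself, and the entity handling is deliberately minimal:

```javascript
// Reduce pasted HTML to plain text (very simplified; real-world HTML
// would warrant a proper parser rather than regexes).
function pastedHtmlToText(html) {
  return html
    .replace(/<[^>]*>/g, ' ')   // drop tags
    .replace(/&nbsp;/g, ' ')    // a couple of common entities
    .replace(/&amp;/g, '&')
    .replace(/\s+/g, ' ')       // collapse whitespace
    .trim();
}

// Hypothetical paste handler for the restricted heading editor.
function onHeadingPaste(e) {
  e.preventDefault();
  const html = e.clipboardData.getData('text/html') ||
               e.clipboardData.getData('text/plain');
  document.execCommand('insertText', false, pastedHtmlToText(html));
}
```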
So, what did I learn? Why roll your own?
Well, I wouldn’t advise it universally. First, do this:
- Identify your critical functionality, your nice-to-haves, and any bonuses
- Identify what you don’t want (multiple dependencies, etc)
- Research what is out there
- Score them
- Build super-quick prototypes; aim for equal outcomes for better comparison (like [TodoMVC])
- Score them again
If your needs are met, great! If not, then maybe roll your own. If you do, consider:
What can you reuse from the exercises above? Not just code, but ideas and strategies.
Are there smaller, helper libs that can get you up to speed faster? Both Quill and Medium.js make use of other small dependencies to help with things like range selection or undo history. I looked at things like [clipboard.js] to see what I could “delegate.”
What is absolutely critical to build right now, and what can wait? Don’t ignore what can wait when you design your code, but plan to build it out later. In other words, build only what you need to validate your criteria, but keep growth in mind so you minimize having to revisit core code later.
Once you’ve got all that out of the way, start coding!
I honestly think the above steps are necessary to make the end result actually worthwhile. Too many times I’ve said, “f*%k it, I’m going to write this myself” and ended up wanting to tear my hair out. Measure twice, cut once, and all that.
In the end, if you’ve done your planning and your homework, I think you get:
- a solution that meets 100% of your needs
- less bloat
- less code to manage
- a much better understanding of how stuff works
You can tell from the emphasis on that last one that I think that is the most valuable bit.
Good luck, and happy coding!
It’s been a long summer so far, but it has seemingly flown by. I know it has been a while since I’ve posted anything here, but it is for a couple of great reasons.
First, I’ve been spending some more time with my son—playing basketball, biking, hunting for blackberries (and Pokemon)—and with friends, both new and lifelong. Getting to spend time with people I love and whose opinions and outlook I value has been restorative and very welcome.
Second, I have nearly wrapped up my personal project A Life Alone. When I say “wrapped up,” I mean I’ve got the launch version ready and content lined up for the first month or so, with plenty to come after that.
Crucially, though, out of getting that ready for launch, I’ve found a product idea I am both passionate about and that will (hopefully) be of use to many other folks. I believe it is a viable idea because there are others like it out on the web already.
Will mine be different? How? Why bother?
I hope it will be different, but that’s not the reason I am building it. I’m building it because I honestly care about helping others tell their stories, about helping people be heard, about helping people be expressive and creative. I also know I can deliver more than others, that I can do it for less, that I will serve my audience first and foremost, and that I will try my hardest to deliver something others will be proud to use as much as I am. It may fail or it may change, but I know I must give it a try.
Development has gone, and is going, very quickly, so I hope to have the beta up and running by the end of August at the latest. Stay tuned for more info here, and loads more process, tech and design posts as I go.
All the best,
Tesla v Fortune
Just a few hours after I posted yesterday, a couple of different folks directed me to Tesla’s blog post regarding Joshua Brown’s fatal crash on ‘autopilot’ and an article by Fortune that is critical of Tesla and CEO Elon Musk.
It took me some time to digest both pieces, as well as some additional information surrounding them, but the timelines they present are clearly different than those that were more broadly reported in the media.
The most contentious part is the gap in time between the initial accident, Tesla’s investigation, and ultimately the kickoff of the NHTSA evaluation. It is hard to say, though, whether this is just standard practice for accident investigations, or whether there is anything abnormal about the process.
All things considered, it is a story that is still developing, and it would be speculative to comment on why the timeline has unfolded the way it has until all parties have published their findings and reports.
The remainder of the discussion over the SEC filing, sale of shares in Tesla, and other fiscal matters is not really relevant to the piece I wrote.
New tech, old fogey
I also got a few emails yesterday after publishing that asserted something along the lines of how I was being “anti-innovation” or was simply “resisting change as [I] get older.”
While it is true that I am indeed getting older, it is far from the case that I am somehow resisting change. My larger point was not about the fantastic new tech that comes out every day, but about two things:
- We, as consumers, and as human beings, need and deserve oversight of companies that are pushing at the forefront of technological innovation (which I’ll write more about below).
- We all need to be aware of the human costs (fatal or otherwise) associated with every advance in technology, and we should be sure that companies acknowledge those risks, provide adequate education about their technology, and are not utilizing the general public as a laboratory for validating hypotheses.
When I wrote about how Tesla’s confident attitude towards its technology can potentially lead drivers to invite greater risk through reckless behavior, I was less interested in chastising Tesla and more concerned with how we provide better education for drivers, greater oversight for emerging technology and generally work to further improve safety by not abusing technology or becoming overly reliant on it.
There is a wonderfully written, well considered article that covers much ground, from Mr. Brown’s fatal crash, to the first recorded steam railway fatality. In short, author Karl Stephan artfully establishes a pattern of learning from tragedy when new technologies are introduced to the world.
Mr. Stephan illuminates this notion of trust from a different angle, describing how the very success of a technology can be one of its greatest dangers:
Even if I had a self-driving car (which I don't), and after driving it for a while and learning what it typically can and can't do, I wouldn't feel very comfortable just sitting there and waiting for something awful to happen, and then having to spring into action once I decided that the car wasn't doing the right thing.
That's a big change of operating modes to ask a person to do, especially if you've been lulled into a total trust of the software by many miles of watching it perform well. Who wouldn't be tempted to watch a movie, or read the paper, or even sleep?
Indeed, when companies provide new technology that can save lives by removing some portion of human decision-making and/or responsibility for certain actions, the temptation to exploit that technology is always strong.
When airbags were first introduced, there were inevitably cavalier folks who believed the airbag was a stand-in for seatbelts, when in fact some studies show fatalities rose when airbags were used without a seatbelt. As a result, a great deal of education focused on how the two technologies were intended to be used in conjunction for the greatest benefit. Reporting after the first major study results were in shows just how successful airbags were (when used together with a seatbelt).
Much like Tesla’s ‘autopilot,’ anti-lock braking, the seatbelt, or countless other automobile innovations, the benefits were clear, despite some fatalities linked to the technology itself. The NYT article summed it up thusly:
"It is like the introduction of a life-saving vaccine," said Mr. O'Neill. "There may be a few predictable fatalities, but on the balance there is a huge public health saving."
Our rights: consumers, citizens, humans
I wanted to take a second to elaborate on the subject of oversight.
When I write “oversight,” I do not necessarily mean government-led oversight. Indeed, as consumers, we already have some great protections built into the Bureau of Consumer Protection, various state and local offices, and countless independent organizations that review products on behalf of consumers.
That said, I firmly believe we cannot rely on government as our sole advocate in the emerging tech landscape. Increasingly, the burden continues to fall on consumers / users / citizens, as companies push hard in ever-increasing directions. Indeed, the fact that several market segments for new products didn’t even exist one, two, or five years ago makes it improbable for any traditional regulatory body to cope with the pace of innovation.
The problem is not just in hardware, or tangible goods, either. As more of our data moves online, new aspects of our lives are quantified and tracked, and more intimate details of our lives are bought and sold for research or marketing, we risk losing control of aspects of our humanity that we never thought of as ‘for sale.’
While the vast majority of that data is used for the collective and individual good—health care, social welfare, economic equality, etc—we cannot discount the influence of bad actors working within the system, or corporate entities motivated by returning profits to hungry investors and shareholders.
Our rights in a more digital world are a complex and tricky thing. New issues of data ownership seem to emerge on a daily basis from sources as varied as Facebook to the smart fridge that knows what you eat. Can an insurance company charge you higher premiums if the fridge cam shows you eat like crap every day? Some might say yes. But what about if your (hypothetical future) FitBit shows that you are building muscle, burning thousands of extra calories a day. Does the added consumption make sense now?
Regardless of the answers to the above questions, a larger, more fundamental question looms: who owns that data, and what are our rights regarding its use?
Almost universally at this point, when we sign up for a new service, buy a new piece of tech or otherwise participate in any activity that is connected to the internet, we have agreed to terms and conditions that vastly favor the corporation. Knowingly or not, we surrender our rights and our ownership to hundreds of services and partner business on a regular basis. In return we get the benefits—real or perceived—of the service in question, but we get very little of the traditional protections afforded to us in more established markets.
Moving forward, there is likely very little headway to be made with the corporations themselves. We will have to invest in better education for consumers, greater community resources for when things go wrong, and broader protections for individual citizens that balance the value of human life against the volume of consolidated corporate wealth.
A bit of fact checking
Finally, I wanted to dig into one thing Tesla asserts in some of their literature, and that Elon is quoted, by the Fortune article, as writing in an email thread:
“Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.”
I cannot find accurate data regarding the total number of worldwide fatalities (cited above as 1M), but the claim that ‘autopilot’ will reduce fatalities by half is interesting, and has been brought up before. I know lots of other media outlets (even MIT) are covering this right now, but I thought I’d take a look, based on Tesla’s own numbers and some available math.
First, some data points:
- roughly 80K Model S sold* worldwide, with ‘autopilot’ tech available, through Q2 2016 [source]
- roughly 63K of the above in the U.S. [source] [source]
- the above based on ‘autopilot’-capable cars having started shipping in Oct 2014 [source]
- 253M cars on the road in the U.S. (2014) [source]
- 1.2B cars on the road worldwide (2014) [source]
(* I denote sold here because they may not all be delivered yet. For the sake of argument, I will assume they are.)
Tesla’s post after Mr. Brown’s death claims:
- “Among all vehicles in the US, there is a fatality every 94 million miles”
- “Worldwide, there is a fatality approximately every 60 million miles”
- 130M ‘autopilot’ miles through June 2016
So, let’s start the math!
If there have been 130M miles on ‘autopilot,’ driven by a fleet of 80K Model S vehicles, then we get an average of 1,625 miles on ‘autopilot’ for each vehicle:
130,000,000 / 80,000 = 1,625
That means that, since this is the first fatality, we’d get rates of accidents on ‘autopilot’ that look like:
1 / 80,000 = 0.0000125 'autopilot' fatality rate worldwide
1 / 63,000 = 0.0000158 'autopilot' fatality rate U.S.
By using the same number of miles (1,625) as an average for all of the cars worldwide and in the U.S., we’d get some total miles driven like so:
1,625 x 1,200,000,000 = 1,950,000,000,000 miles worldwide
1,625 x 253,000,000 = 411,125,000,000 miles U.S.
Plugging in the accident rates of 60M and 94M miles respectively, we get:
1,950,000,000,000 / 60,000,000 = 32,500 fatalities worldwide
32,500 / 1,200,000,000 = 0.0000271 fatality rate worldwide
411,125,000,000 / 94,000,000 = 4,373 fatalities U.S.
4,373 / 253,000,000 = 0.0000172 fatality rate U.S.
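The whole back-of-the-envelope calculation can be reproduced in a few lines, using the figures cited earlier in this post:

```javascript
// Figures from Tesla's post and the sales estimates above.
const apMiles = 130e6;        // 'autopilot' miles through June 2016
const modelSWorld = 80000;    // 'autopilot'-capable Model S sold worldwide
const modelSUS = 63000;       // of which in the U.S.
const carsWorld = 1.2e9;      // cars on the road worldwide (2014)
const carsUS = 253e6;         // cars on the road in the U.S. (2014)

// Average 'autopilot' miles per Model S.
const avgMiles = apMiles / modelSWorld;               // 1,625

// One fatality so far, so per-vehicle 'autopilot' fatality rates:
const apRateWorld = 1 / modelSWorld;                  // 0.0000125
const apRateUS = 1 / modelSUS;                        // ~0.0000159

// Apply the same average mileage to the whole fleet, then divide by the
// miles-per-fatality figures (60M worldwide, 94M U.S.):
const fatalitiesWorld = avgMiles * carsWorld / 60e6;  // 32,500
const fatalitiesUS = avgMiles * carsUS / 94e6;        // ~4,373.7
const rateWorld = fatalitiesWorld / carsWorld;        // ~0.0000271
const rateUS = fatalitiesUS / carsUS;                 // ~0.0000173
```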
Thus, if we were to compare the rates of ‘autopilot’ fatalities, we’d get:
|           | Tesla AP  | All Cars  |
| --------- | --------- | --------- |
| Worldwide | 0.0000125 | 0.0000271 |
| U.S.      | 0.0000158 | 0.0000172 |
Indeed, worldwide the rate of reduction is closer to 54%, but here in the U.S. the difference is closer to 9%, despite the vast majority of Model S vehicles being within the U.S., and thus potentially representing a far greater share of the total miles driven (roughly 78% of all ‘autopilot’ miles).
The difference may be attributable to a great many things. Certainly the ratio of Tesla’s cars to total cars worldwide is far smaller than it is to total cars in the U.S.
But it may also be that U.S. safety laws, traffic laws and requirements on automobiles manufactured for the U.S. market do a better job of providing safety for drivers.
It may be because of more readily available trauma care that reduces the incidence of fatalities relative to total number of accidents.
The bigger point is that statistics are very easily manipulated and repurposed. We should always be careful when taking a statistic at face value, and ideally we should do our best to form our own opinions, given the available information.
In this case, it is clear why Tesla states the worldwide figures for PR purposes, but that doesn’t mean it is any more, or less, valid than other readings of the numbers.
Update (July 06, 8:00PM): Someone pointed out Tesla's rebuttal to a Fortune article which presents different information about the timeline of Joshua Brown's death. The only date I have referred to below is June 30, the date of Tesla's press release, while it appears the accident itself occurred on May 7.
I will write more tomorrow about the new information both articles present, as well as a response to some questions about my position on innovation, 'autopilot' and technology.
On June 19, the actor Anton Yelchin was killed in a roll-away accident involving his Jeep Grand Cherokee. That vehicle, it turns out, had been recalled back in April because owners had difficulty using the “monostable” gear shift in the car. Here’s a photo of the shifter on Flickr, along with some more documentation of the recall.
The issue central to the recall was that Fiat Chrysler had received several complaints of drivers exiting the car without placing it in park. In order to understand just how bad the shifter design is, and how easy it is to make such a mistake, take a look at this YouTube video from 2012 demonstrating how to use the new shifter as introduced in the 2013 Dodge Challenger.
Pay close attention around 0:26, where the demo driver pushes, then holds the shifter until it moves from D to R and finally to P. By the release of Mr. Yelchin’s Jeep, some refinement of that delay appears to have been introduced, as shown in the operating video for its shifter.
Given that the recall involves 1.1M vehicles worldwide (800K in the U.S.), it is likely that not every owner watched these videos, which are meant to augment, and in some cases replace, a printed owner’s manual. For example, prior to Mr. Yelchin’s accident, the shifter video for that vehicle had been viewed roughly 8,000 times (roughly 20K views since).
Given sales of 195,958 Grand Cherokees in 2015 it seems unlikely that all owners, much less all drivers, had viewed the relevant video.
At any rate, the video demonstrates how easily the vehicle may accidentally be placed in reverse, or how a driver may inadvertently not complete a shifting operation. It is just as easy to see how an analog shifter — one with immediate physical feedback — would likely help avoid such a mistake.
Furthermore, the design of this shifter appears to be the worst kind of design—the kind of design that tarnishes the public’s view of design—design for design’s sake.
- Does this shifter design enable something a more traditional shifter does not?
- Is this design required by the mechanics of the transmission?
- Is there a measurable improvement to the vehicle because of this design?
- Is the cost of re-training drivers’ behavior worth the tradeoff in design relative to a traditional shifter?
When ‘flappy-paddle’ shifting was introduced on some high-end vehicles such as the Ferrari F355 (1997), it had already been in use in Formula One since the late 1980s. Loads of kinks had been worked out before introducing the design and gearbox to consumers. That does not appear to be the case with the Fiat Chrysler design.
Indeed, even for Ferrari, early designs were not considered adequate. In the F355, a driver had to operate both paddles simultaneously, then select from a menu of park, reverse or neutral. As this was still perhaps prone to error if the wrong option was selected, models such as the Ferrari California made reverse an explicit choice by including a large ‘R’ button on the center console.
In a statement dated June 30, Tesla Motors acknowledged the death of Joshua Brown, a Tesla owner and enthusiast who died while his car was operating on ‘autopilot.’ It is the first fatality in a Tesla with ‘autopilot’ activated, and Tesla was quick to trot out lots of numbers and statistics in an attempt to diminish the public relations impact of the accident.
Since ‘autopilot’ officially launched, Tesla has distanced itself from culpability by stating, “The driver is still responsible for, and ultimately in control of, the car.” The Guardian and others have underscored this position in their reporting:
Tesla is very clear about the fact that the driver is responsible for the car at all times and should be actively in control, despite the AutoPilot system: it will be the driver’s fault, not Tesla’s if the car ends up in a road traffic collision.
But I challenge you to interpret Tesla’s thirteen words (of 409) as a clear, legally binding commitment in the context of the ‘autopilot’ press release:
Tesla Autopilot relieves drivers of the most tedious and potentially dangerous aspects of road travel. We're building Autopilot to give you more confidence behind the wheel, increase your safety on the road, and make highway driving more enjoyable. While truly driverless cars are still a few years away, Tesla Autopilot functions like the systems that airplane pilots use when conditions are clear. The driver is still responsible for, and ultimately in control of, the car. What's more, you always have intuitive access to the information your car is using to inform its actions.
Especially when surrounded by statements about how the functionality will take care of tedium and danger, this paragraph feels more like an invitation to turn on ‘autopilot,’ kick back, and surf the web all the way from your house to your office than an advisement to exercise caution and restraint. 
However, as far back as October 2014, legal experts and policymakers had begun debating Tesla’s legal responsibility, or that of any autonomous or enhanced vehicle manufacturer. Seemingly in contrast to Tesla’s distancing itself from legal culpability, as David Snyder states for this 2011 Wired article (where “bring in” refers to incorporating the manufacturer in legal proceedings):
The driver is presumed to be in control of his or her vehicle, but if the driver feels that there have been some facts supporting the notion that the equipment caused in whole or in part the accident, that driver would probably bring in the manufacturer of the equipment.
That Wired piece discusses both autonomous driving, as well as innovations that assist drivers, but in both cases it is fair to say that legal issues around who is at-fault, who is responsible for damages and much more, are far from settled.
When one of Google’s autonomous test vehicles caused a minor accident back in February, much of the reporting focused on the confluence of human errors in judgment that led to the accident. Google itself seemed to come down more on the side of human error, while timidly acknowledging “some” responsibility:
In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.
Subsequent reporting shows how murky the waters are when it comes to the determination of responsibility:
The current law means that if a self-driving car crashes then responsibility lies with the person that was negligent, whether that’s the driver for not taking due care or the manufacturer for producing a faulty product.
Is the driver responsible for not intervening? Is Google responsible because its software failed to account for this scenario? Is it even possible to account for all scenarios, so long as human drivers still drive alongside autonomous vehicles? What about when a child dives in front of an autonomous vehicle that cannot stop in time? What about when an autonomous vehicle is hacked? Is a death caused by an autonomous vehicle “manslaughter” in the traditional sense?
Clearly, there is ground to cover.
Prior to Mr. Brown’s fatal crash, drivers of Tesla vehicles had experienced and reported some dicey moments while on ‘autopilot’ that point more to the software and hardware than to human negligence. But, to varying degrees, drivers are making the case for fully computer-controlled autonomy over human intervention with their actions.
A positive experience with Tesla’s ‘autopilot’ as early as October of last year has a team of drivers crossing the U.S. in record time, earning a post-hoc endorsement from Elon himself.
However, drivers have also continued to abuse what ‘autopilot’ can or should do by posting videos of what is often very ... reckless ... behavior online. 
In the days since the crash it has been reported that Mr. Brown was watching a Harry Potter DVD at the time of his crash, though it would be irresponsible not to mention that much about the crash is still under investigation.
To be fair, as stated in Tesla’s press release regarding Mr. Brown’s accident, the system will demand ‘hands-on’ contact with the wheel every so often, which is difficult to discern in the videos above.
The system also makes frequent checks to ensure that the driver's hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.
I am of two minds here.
It is abundantly clear that the behavior of some drivers is irresponsible, reckless and endangers the lives of others on the road, as much as their own. Though the actions of a few cavalier Tesla owners should not condemn them all, it would be difficult to see how such behavior does not lead to further injuries or fatalities.
On the other hand, given the tone of Tesla’s initial press release for the launch of 'autopilot', and the company’s half-hearted tsk-tsk-ing of those using ‘autopilot’ recklessly, it is easy to see why Mr. Brown and others continue to experiment with their cars.
Do we blame the driver that receives a ticket (in the third video above) when he does not manually intervene as his Tesla cruises along at 75 in a 60 zone? Has Tesla truly ensured driver safety in this case? Does Tesla care more about optimizing travel time over human safety (both in the vehicle and around it) when it seems ‘okay’ to ignore certain laws?
Here’s my issue.
Allowing big corporations to maintain a laissez-faire attitude with regards to their culpability for human lives is dangerous.
Google’s AV operators are presumably well informed about the risks and responsibilities that come with their job before they ever set foot in a car. Thus the millions of miles and tens of thousands of hours that Google vehicles have been on the road, gathering data, are a formal part of someone’s job.
In contrast, Tesla’s 130 million miles of ‘autopilot’ data is gathered from driver-owners of the vehicles. Presumably owners get info about this at purchase. While all of that data is analyzed and fed back into the decision-making matrix used by ‘autopilot,’ it serves Tesla more than it does any individual owner.
Put more clearly: Tesla owners are de facto guinea pigs for ‘autopilot’ versus Google employees who are compensated for their time and whose job it is to test and gather data.
Chrysler Fiat’s rollout of a shifter design that clearly caused well-documented confusion is a similar example of disregard for consumer well-being.
Certainly their factory engineers and testers logged hundreds or thousands of hours on test vehicles, but those folks are trained users, operating the equipment day in and day out. Offloading user manuals to Youtube videos seems like another slippery way to meet some abstract legal requirement for documentation; a place to point to when something goes wrong, with a “Well, we did post a video on xyz.”
It doesn't matter if a company is based in San Francisco or Detroit: Innovation for innovation’s sake, or ‘disruption,’ without accountability is something we need to take a long hard look at.
Autonomous vehicles are just one small part of this big picture. There are other easily observed scenarios where people’s lives, or livelihoods, are easily taken advantage of by companies that are keen on ‘disrupting’ some aspect of society:
Uber, Lyft and other ridesharing companies that force costs onto drivers while treating drivers as contractors to avoid paying employee benefits such as healthcare, etc. Those same rideshare companies also face scrutiny over not compensating riders in legal or accident claims.
Airbnb and other home-rental services face mounting legal pressure over zoning laws, licensing and other legal challenges similar to ridesharing companies. Cities like Seattle are considering limiting rentals within the city, which can adversely affect those who have built up businesses around renting homes, even when they comply with current law, pay taxes, etc. Further, Airbnb has manipulated statistics in order to paint a more favorable view of the company, particularly after large PR headaches such as this one where an Airbnb ‘super host’ is inexplicably and unceremoniously removed from the platform.
Theranos, a biotech startup once valued at $9B, is now under Congressional scrutiny for false claims that it could detect hundreds of diseases from a single drop of blood. In its letter, the Congressional Committee writes: “Given Theranos’ disregard for patient safety and its failure to immediately address concerns by federal regulators, we write to request more information about how company policies permitted systemic violations of federal law.”
Solyndra, a solar company that lied to secure $500M in federal money, may not have directly harmed any given individual, but that money comes straight from U.S. taxpayers as part of a larger incentive for green energy.
Fast Company summed up these legal woes and more, in a piece aptly titled, The Gig Economy Won't Last Because It's Being Sued To Death.
The common theme in all of this is that very large, venture-backed (save one of the above) startups are prioritizing profit above all else. Often it is the users—members, drivers, renters, owners—that bear the brunt of hardship, loss, and legal fallout when companies go head-to-head with governments and existing market forces.
In one sense, we cannot place blame on them, since venture capital demands a high rate of return in exchange for cash injections. But in many practical ways, as well as conceptual ones, these companies are doing very real damage to the very people they rely on for their data, their products and indeed their very success.
In a traditional employer/employee relationship there are well-established protections for both parties. As progress continues, it is clear that keeping legislation apace with innovation is impractical. But to avoid a world run by a new kind of carpetbagger, we will have to examine what it means to be an employee, to be a consumer, to be a citizen, and to have rights. And we’ll have to do it fast.
 One other thing worth noting, the comparison in this press release to the autopilot feature on large aircraft is a bit unfounded. Pilots of major commercial aircraft (those on which autopilot systems are found) must log 1,500 flight hours before they are eligible for hiring by U.S. airlines, and 1,000 hours before being allowed to captain a flight. Obviously no similar training is required for operating an autonomous or partially autonomous automobile.
 All those videos via this Guardian article
Someone recently shared a link to Has Design Become Too Hard?, an article by Jeffrey Zeldman about the changing landscape of designing for the web and the tools that we use to do that job. 
Ignoring the salesmanship for a moment, the article makes a couple of important points, and misses something I think we overlook in our profession: not everyone wants to do it all on their own.
The pull quote says it all:
So whether you use a framework as part of your design process or not, when it’s time to go public, nothing will ever beat lean, hand-coded HTML and CSS.
Here, Zeldman is referring to the excessive markup, styles and scripts added to any project by utilizing a framework.
Yes, CSS and JS frameworks help us get from zero-to-functional very rapidly, and with relatively little investment from the overall project or organization, allowing us to try new ideas, create prototypes, and test interactions without committing to anything prematurely.
But too often we conflate optimizing for the machines that deliver our content, with optimizing our own workflow as designers and implementers.
Constraints like those Jeremy Keith and Karolina Szczur wrote about serve as a reminder that knowledge of the fundamental technologies of the web is also the most fundamental part of our toolset, and our jobs.
Truly understanding and empathizing with users whose experience of the web is different than our own—whether that’s because of a 2G internet connection or a physical disability—forces us to be as prudent as possible with every tag and every style we write.
Put differently, knowing when to use a <div> as a wrapper to hook some styles to, versus having a framework make that decision for you, is something learned by getting one’s hands dirty, not at an abstract level.
For us to make efficient use of those higher-level abstractions that frameworks provide, we should absolutely be using the most semantically meaningful, functionally discrete markup and styles. That way, when our designs get turned into hyper-efficient delivery code, we minimize the amount of needless containers and frivolous markup required to be truly modular.
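To make the contrast concrete, here is a small illustrative sketch. The `container`/`row`/`col-md-8` class names come from Bootstrap’s documented grid; the lean alternative, including the `.post` class, is a hypothetical hand-coded equivalent, not prescribed markup.

```html
<!-- Framework-style markup: the wrapper divs exist only to hook the grid's styles -->
<div class="container">
  <div class="row">
    <div class="col-md-8">
      <h1>Article title</h1>
      <p>Body copy goes here.</p>
    </div>
  </div>
</div>

<!-- Lean, hand-coded equivalent: a semantic element carries the styles directly -->
<article class="post">
  <h1>Article title</h1>
  <p>Body copy goes here.</p>
</article>
```

The second version needs only a few lines of CSS (say, a `max-width` and centered margins on `.post`) to achieve the same layout, with no extra containers shipped to every visitor.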
Zeldman puts a different spin on something I’ve written before:
At the end of the day, so long as the browser remains our de facto target, then the end result will ALWAYS BE HTML.
We’ve Seen It All Before
I’ve never seen a design idea spread faster, not only among designers but even clients and entire corporations […] Yet, after a euphoric honeymoon, designers soon began complaining that responsive design was too hard, that we’d never faced such challenges as visual people before. But haven’t we?
Indeed we have! Zeldman rightly points out that, since the very early days of the commercial web, we’ve seen challenges such as screen resolution, browser support, and even differently shaped pixels. For a visual medium, this is about as infuriating as it gets.
Once again though, if we look at the broader profession of design, constraints have been with us all along, pushing us forward and inspiring our creations.
Newspapers have column widths and—gasp—actual folds! Print underwent something akin to the webfont revolution with the invention of movable type. Radio content must articulate complex ideas and stories using only audio. Television has faced resolution and color rendering issues that should be familiar to us all.
All of those media are still undergoing technological innovation alongside—and often in conjunction with—the internet, but with legacies, traditions and practices that are decades or centuries older than the web.
Consider the adjustments that these media have made, not just visually, but in production, consumption and context of the content they deliver. They were all challenges at the time, but they spurred innovation, and in many cases, complete reinvention.
What Is A Framework?
Here’s the one thing I think this article does not address: the issue of the overloaded term “framework,” and indeed of the use of the word “code” as well.
Without diving into the abyss that is the debate over what constitutes the practice of coding, or dipping into the “should designers code” fray, I would like to try to shine some light on what I will call design frameworks versus development frameworks. (I know, I’m treading on verrrrry thin ice here!)
Broadly then, 
- Design frameworks are what I will call tools that help you establish and reuse design patterns, enforce styles and aid in the transition from sketches to visual and interactive prototyping
- Development frameworks are what I will call tools that encapsulate design patterns in code, enable the creation of reusable modular markup, bind templated markup to data, and aid in the move from prototype to production
In the case of Zeldman’s article, it would appear he is referring primarily to design frameworks. Things like Bootstrap, Foundation or Skeleton, or perhaps even simpler tools like Responsive Grid System or 1140 Grid.
All of that said, I think it is disingenuous, perhaps even detrimental, to use the catch-all term “frameworks,” without providing the scope of its usage. This is where I think we need precision with the term framework. Generally speaking, I would say it like this:
Development frameworks possess criteria that make them more about the implementation of a product than about the establishment of the visual and interactive patterns of that product.
As It Relates To People
I do not mean to diminish the skills or talent of any individual person or team, but I want to firmly state that I do not believe any single person must be fluent in all aspects of product development.
While it is truly amazing that there are people who can create an entire product—from stylesheet to database query—all on their own, this is not necessarily what everyone wants to be.
Often, if you are hacking alone, or are a small startup, you may make use of both of these categories of framework (and much more) on a daily basis, just to try out even a simple idea.
Larger organizations may be split into discipline teams such as visual design and web development, or into cross-functional teams focused on a particular feature or project. In these cases, familiarity with different frameworks helps, but day-to-day work might not require you to work outside a few specific tools.
There is no one single “designer” archetype, any more than there is one for a developer, author, artist or actor. James Franco, for example, ticks the actor box, but he’s also done some other stuff.
So, when we talk or write about frameworks, learning to code, or the occasional narwhalicorn, let’s be sensitive to a couple of things:
- Not every designer wants to code, just like not every developer wants to design.
- Not all frameworks are created equal. Bootstrap includes behavioral JS where Skeleton does not.
- Not everyone needs all of the things, all of the time.
- We all want the same thing in the end, the best possible experience for our users.
- That fourth point will mean different things to different people—a db designer will want performant queries, while a marketing manager will want better conversion rates—but they all translate to a better experience.
Okay, that’s my $.02, have a great weekend!
 Apologies if I’m dredging up old news, as I’m not sure if the article is new or not since CommArts has one of my personal pet peeves, posts with no date.
 Please note, both the terms design frameworks and development frameworks are further “scoped” here as part of a web design/development toolkit. I recognize that platforms like iOS and Android have their own tools, as much as there are thousands of development frameworks, scaffolds and libraries all the way down the stack to the db. It simply felt incorrect to prefix the terms with “web” in this discussion.