I Know This Much is True: Thoughts on AI Hallucinations

AI is amazing. For example, it’s revolutionizing search so you can find stuff faster and more efficiently than ever before. Like in 2023, when someone asked Google’s Bard for some cool things about the James Webb telescope they could tell their 9-year-old, and right away it reported that the telescope took the very first picture of a planet outside of our solar system. Cool, right? And at the other end of the spectrum, in 2022, when a researcher was digging into papers on Meta’s science-focused AI platform Galactica, he was able to find a citation for a paper on 3D human avatars by Albert Pumarola.

Unfortunately, both of these results were bullshit.

The first picture of a planet outside our solar system was taken 17 years before the James Webb telescope launched, and while Albert Pumarola is a real research scientist, he never wrote the paper Galactica said he did.

So what the hell is going on?

Both of these are cases of “hallucinations” – stuff that AI just gets wrong. And while those two examples come from LLMs (“Large Language Models” – “text-based platforms” to the rest of us), they also happen – with spectacular results – in image-based generators like Midjourney (check this horror show out). But right now, let’s stay focused on the LLMs, just to keep us from losing our minds a little.

And let’s start by reminding ourselves what AI really is: a feedback loop that generates the “next most likely answer” based on the patterns it sees in the data you’re exposing it to. So, hallucinations (also called “confabulations,” by the way) occur because, as Ben Lutkevich at TechTarget.com writes, “LLMs have no understanding of the underlying reality that language describes.” Which, interestingly enough, is fundamentally not how humans understand language. As Khari Johnson writes in Wired (as reprinted in Ars Technica):

UC Berkeley psychology professor Alison Gopnik studies how toddlers and young people learn, to apply that understanding to computing. Children, she said, are the best learners, and the way kids learn language stems largely from their knowledge of and interaction with the world around them. Conversely, large language models have no connection to the world, making their output less grounded in reality.

In other words, for humans, language – words, etc. – represents things in the real world. But for LLMs, words are just elements in the patterns they see in the data. Which, yeah, they are – but for humans, those “patterns” are in service of something called “meaning,” and for LLMs they’re not. They’re just patterns. And because patterns are a significant part of language, when AI platforms replicate them in answers back to us, the results sound believable. Like the telescope thing. Or the scientific citation.
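To make “just patterns” concrete, here’s a toy sketch in Python – a counting-based “next most likely word” model. Real LLMs use neural networks trained on vastly more data, but the basic move (continue the pattern, with no notion of truth) is the same. The corpus and names here are invented purely for illustration.

```python
from collections import Counter, defaultdict

# A toy "next most likely word" model: pure pattern-matching, zero meaning.
corpus = (
    "the telescope took a picture of a planet "
    "the telescope took a picture of a star "
    "the telescope orbits the sun"
).split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_most_likely(word: str) -> str:
    """Return the statistically likeliest next word -- no truth-checking."""
    return following[word].most_common(1)[0][0]

# Generate text by chaining predictions.
word = "the"
output = [word]
for _ in range(6):
    word = next_most_likely(word)
    output.append(word)

print(" ".join(output))  # plausible-sounding, but nothing here "knows" anything
```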

But I also think there’s another reason why they work on us. We’re sort of pre-programmed to believe them just because we asked the question.

Think of it this way. If you’re looking for information about something, in a sense, you’ve created a pattern in your head for which you are seeking some sort of reinforcement – that is, an answer that fits into the pattern of thinking that generated the question. Like the telescope example above – one could assume from the question that the person already had some awareness of the telescope and its abilities. Perhaps they’d read this article in Smithsonian magazine about seven amazing discoveries it had already made – but felt that the article was too esoteric for a nine-year-old. The point is, they had an expectation, which is, I think, a form of pattern. So when the LLM provided an answer, it plugged very neatly into that pattern, creating an aura of truth around something that was fundamentally false.

And in a sense, this is not new news. Because as every grifter will tell you, for a con to succeed, you gotta get the mark to do at least half the work. And where AI hallucinations are concerned, we sort of are.

So, hallucinations are bad and we have to be on our guard against them because they will destroy AI and us, right?

Well, no, not exactly. In fact, they may actually be a good thing.

“Hallucinations,” says Tim Hwang, who used to be the global public policy lead for AI at Google, “are a feature, not a bug.”

Wait, what?

At the BRXND conference this past May, Tim used the metaphor of smartphones to explain what he meant. First, he reminded us, smartphones existed. Then, he explained, a proper UX was developed to not only use them effectively, but to take advantage of their unique capabilities, capabilities that revolutionized the way we think about phones, communicating, everything. Tim believes we’re in a similar, sort of “pre-smartphone-UX” stage with AI, and that because our interfaces for it are extremely crude, we’re getting hallucinations. Or, said another way, the hallucinations are telling us that we’re using AI wrong; they’re just not telling us how to use it right yet.

This “using it wrong/using it right” idea got me thinking as I plowed through some of the literature around hallucinations and came across this from Shane Orlick, president of the writing tool Jasper.AI (formerly “Jarvis”), in a piece by Matt O’Brien for AP News:

“Hallucinations are actually an added bonus,” Orlick said. “We have customers all the time that tell us how it came up with ideas — how Jasper created takes on stories or angles that they would have never thought of themselves.”

Now sure, this could just be a company president looking at the hallucinations his AI is generating as a glass half full, as it were. But it got me thinking about the idea of creativity – that in a sense, hallucinations are creativity. They may not be the factual answers you were looking for, but they’ve used those facts as a springboard to something new. You know, like creativity does.

I mean, who among us has not sat in a brainstorm and come up with some wild idea and had someone else in the room say “well, yeah, that makes sense, except for this and this and this” (just me? Oh…). How is that different from the hallucinations we started this essay with? “Yeah, that James Webb Telescope fact makes sense because an exoplanet is the kind of thing it would see, but it didn’t take the first picture of one because of this and this and this.”

And better yet, how many times have you sat in a brainstorm where someone came up with an idea that wasn’t perfect, but that was great nonetheless, and that the team was able to massage and adjust to make perfect? Why couldn’t you do that with AI hallucinations?

Could the path forward be not the elimination of hallucinations, but the ability to choose between outputs that are proven, documented facts and outputs that are creative leaps based on those facts? Two functions serving two needs, but resident in one place. In much the same way that in the early days of the internet, we had to wrap our heads around the idea that sometimes we went to a website for facts and information, and sometimes we went to play (and sometimes we went for both. Okay, forget that last example).
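What might that two-mode interface look like? Here’s a deliberately crude sketch – the prompt, the candidate answers, and their scores are all invented, and this is no one’s real product or API – where a “facts” setting always returns the best-attested answer and a “brainstorm” setting lets the long shots through. (Real LLM APIs expose a similar knob, usually called temperature.)

```python
import random

# A sketch of "two functions, one place." All candidates and scores below
# are made up for illustration.
CANDIDATES = {
    "tell me a James Webb fact": [
        ("It observes in the infrared.", 0.80),        # well-attested
        ("It took the first exoplanet photo.", 0.15),  # plausible-sounding, false
        ("It is a time machine for light.", 0.05),     # creative framing
    ]
}

def answer(prompt: str, mode: str = "facts") -> str:
    options = CANDIDATES[prompt]
    if mode == "facts":
        # Factual mode: always return the best-attested candidate.
        return max(options, key=lambda pair: pair[1])[0]
    # Brainstorm mode: sample in proportion to score, so the long shots --
    # the "hallucinations" -- surface as raw material for ideas.
    texts, weights = zip(*options)
    return random.choices(texts, weights=weights, k=1)[0]

print(answer("tell me a James Webb fact", mode="facts"))
print(answer("tell me a James Webb fact", mode="brainstorm"))
```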

Now look, I could be completely wrong about all of this. About hallucinations, about telescopes, about what Tim Hwang meant, about the nature of creativity, about the early days of the internet, about all of it. But it would seem to me that inquiry, even one as faulty as mine, is likely the best path to untangling AI, especially in days as early as these and especially as we encounter challenges like hallucinations. Or, said another way:

“The phenomenon of AI hallucinations offers a fascinating glimpse into the complexities of both artificial and human intelligence. Such occurrences challenge our understanding of creativity and logic, encouraging us to probe deeper into the mechanics of thought. However, we must approach this new frontier with a critical and ethical perspective, ensuring that it serves to enhance human understanding rather than obscure or diminish it.”

You know who said that? Albert Einstein. At least according to the internet. And he was pretty smart, so it made me feel much better about hallucinations. It should make you feel better too. I think.

Follow the Money: A Different Way of Thinking about "Class"

People don’t read, or so I am told. Business people doubly so. Those who inexplicably do, do not read fiction. And those rare few who do read fiction, do not read 19th century fiction.

And yet it occurs to me that buried in a 19th century novel was an insight into a better way to think about class – and therefore, how to market to people – than anything I’d read in more recent, or business-related, books. And it was this:

Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness.

Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.

I know what you’re thinking. You’re thinking “Duh.” You’re thinking “Spend more than you have and you’re unhappy. Spend less and you aren’t. What could be more obvious?” (For those who don’t speak pre-decimal British currency: “nineteen nineteen and six” is nineteen pounds, nineteen shillings and sixpence – sixpence under the twenty-pound income – while “twenty pounds ought and six” is sixpence over it.)

And I agree with you.

So why aren’t we applying that to how we define economic classes?

Because our current terminology is frankly meaningless. Virtually every person I have ever met has told me that they had a “middle class” upbringing. And I can tell you categorically that what each of them meant by “middle class” was wildly different from a financial perspective.

That doesn’t mean they were lying. Instead, I think they were measuring based on the simple fact that they had friends or acquaintances or neighbors who had more money than they had, and they had friends and acquaintances and neighbors who had less money than they had. Which meant that from where they stood, they were in the middle. Thus, middle class.

And of course in America, where there has always been a vast mixing of classes (significantly less so now than in the past, to be sure), one could almost always see oneself “in the middle.” A situation only exacerbated by popular culture and – wait for it – advertising, which show everyone every great thing they can, and cannot, afford, cheek-by-jowl with a level of soul-crushing poverty in parts of the nation one would never otherwise encounter. Are you “Real Housewives of New Jersey”? No? Are you “The Wire”? No? Then you must be middle class.

But Micawber’s observation in Dickens’ 1850 novel David Copperfield may provide us with a better path. What if we measured less by context, and less even by raw dollars, and more by income and expenditures?

What if we said – broadly speaking – that anyone making even one dollar more than they needed to meet their expenses was rich. That anyone making even one dollar less than they needed to pay their bills was poor. And that anyone making exactly as much as their expenses was middle class.
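For the code-inclined, the whole proposed taxonomy fits in a few lines – purely a sketch of the idea above, with Micawber’s pre-decimal sums converted to decimal pounds:

```python
def micawber_class(annual_income: float, annual_expenses: float) -> str:
    """Classify by cash flow, not by dollar amounts or zip code."""
    if annual_income > annual_expenses:
        return "rich"          # even a dollar to spare
    if annual_income < annual_expenses:
        return "poor"          # even a dollar short
    return "middle class"      # breaking exactly even

# Dickens' test cases: 19 pounds 19s 6d = 19.975; 20 pounds 0s 6d = 20.025
print(micawber_class(20.0, 19.975))  # rich   -- "result happiness"
print(micawber_class(20.0, 20.025))  # poor   -- "result misery"
print(micawber_class(20.0, 20.0))    # middle class
```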

This would do two things right off the bat (three if you count annoying every economics teacher I ever had). First, it would speak better to psychological drivers than the old model by disconnecting the definitions from dollar amounts. The lawyer pulling down a cool million a year – but who has annual expenses of 1.5 million – is living paycheck to paycheck as much as any “poor” person you would normally think of. And the pizza delivery guy who paid off his house and his car, who has no credit card debt and no loans, and whose favorite vacation is camping down the road and fishing – that guy’s got more income than he can spend. And how else would you describe a “rich” person?

And sure, you can quibble with their decisions – look at all that alimony the lawyer is paying! Three ex-wives? Wow! – or – why doesn’t the pizza guy take a trip to Italy or buy a newer car? But practically speaking, their decisions are only relevant if we measure class purely by acquisitions and attainments. Of course, it may sound odd for a guy in advertising – the front line of consumerism – to advocate such an approach. But people in advertising know probably better than anyone how fleeting and ephemeral are the purchases people use to identify their success – so measuring anything meaningful by them is, um, meaningless. Which is probably why we push them so hard, I suppose.

And second, a taxonomy like this disconnects the groupings from geography. Every day, people are managing their personal finances in terms of national if not global economic trends. Your class isn’t really based on where you live any more than your entertainment choices are. Sure, sure, it’s more expensive to live in Manhattan, New York than in Manhattan, Kansas, but inflation, medical costs, grocery, gas and housing prices tend to trend similarly everywhere, even if they differ in degree. A measurement that thinks of those broad trends in the context of personal economics is necessarily more useful for understanding why the people in those groups do what they do than one that says “you make more than X? Congratulations, you are no longer poor.” With ramifications professional, political and personal.

Look, I’m no economist. I’m just a simple copywriter who’s trying to understand why people do what they do. Perhaps I’m wrong. Perhaps I am not. Or perhaps we should just use this new model until, as Micawber himself would advise, something better turns up.

Programmed By Fellows with Compassionate Visions: Some Thoughts on Constitutional AI 

Stop me if this has happened to you. You type a simple prompt into some handy AI generator and what comes out is more toxic than a landfill at Chernobyl. I mean, not just a little “off” but like wildly, deeply, disturbingly off.  

And then you remind yourself, oh, yeah, AI is just sophisticated math that looks for patterns in the data it is exposed to, and if the “data it is exposed to” is, you know, “the internet”, then it’s not that surprising that sometimes it produces content that is toxic, harmful, biased, sexist, racist, homophobic etc., since that stuff exists on the internet.  

Which makes sense, even if it doesn’t make it okay, right? 

So how do you make that not happen? Well, currently the strategy is mostly “have humans look at the outputs and freak out if something horrific is being delivered and then fix it.” Which is fine, except for two things.

First, the point of AI, or at least one of the points of AI, was efficiency – how it freed humans up to do other things with their time. And if you have to go back and look through everything it’s doing to make sure it’s not horrifying, then it’s less efficient. I mean, you might as well just write the stuff yourself.

And second, you can’t scale humans. Again, one of the values of AI is the sheer quantity of content it can output insanely quickly – a quantity that it’s not realistic to have humans check over with the digital equivalent of a fine-toothed comb. And, it should be noted, a quantity that is only going to get larger as AI evolves.

So what do you do? 

Many have been exploring something called constitutional, or principles-based, AI. And recent advancements by a company called Anthropic (founded by former OpenAI rocket scientists and funded with some serious Google VC money) have been getting attention, particularly the developments they’ve made in this area on their own generative AI platform, Claude.

So what’s constitutional AI? 

In much the same way that a government has a series of rules and laws that reflect what it believes and what it feels is proper – and codifies those rules and laws in a constitution – constitutional AI does the same thing for AI. A human creates a set of “rules” and “laws” that sort of sit on top of what the AI is doing, to act as a check on the content.  

Sort of like, you ask the AI a question, it generates an answer based on the patterns it finds in the data you’re exposing it to, and then constitutional AI checks it to make sure it’s not generating an answer that’s horrifying.  

Or said another way, that it is generating an answer that is aligned with the beliefs and principles you’ve established in the constitutional AI. 

And it does it crazy fast, and it does it crazy voluminously because, you know, it’s AI and that’s how AI rolls. 
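For the shape of that loop in code, here’s a cartoon sketch – emphatically not Anthropic’s actual method (their published approach bakes the principles in during training, via AI-generated critiques), and every function and string below is invented for illustration: generate, check against each written principle, retry or refuse.

```python
# A cartoon of the generate-then-check loop described above.
CONSTITUTION = [
    "Don't present fabrications as fact.",
    "Don't produce harassing or hateful content.",
]

def generate(prompt: str, attempt: int) -> str:
    """Stand-in for the underlying model; here it returns a bad draft first."""
    drafts = [
        "Here is a fabricated citation for you.",
        "I couldn't find a real citation for that.",
    ]
    return drafts[min(attempt, len(drafts) - 1)]

def passes(draft: str, principle: str) -> bool:
    """Stand-in for a second model pass that judges a draft against one
    principle. Here it's a crude keyword check, purely for illustration."""
    return "fabricated" not in draft.lower()

def constitutional_answer(prompt: str, max_tries: int = 3) -> str:
    for attempt in range(max_tries):
        draft = generate(prompt, attempt)
        if all(passes(draft, p) for p in CONSTITUTION):
            return draft                    # cleared every principle
    return "No compliant answer found."     # fail closed, not open

print(constitutional_answer("Cite a paper on 3D human avatars"))
# -> "I couldn't find a real citation for that."
```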

Which is great, right? Right. Hooray for progress. 

Now, what’s interesting about all this – or among the things that are interesting – is how, in a sense, constitutional AI is a very AI way of solving this problem. AI basically says “these are the patterns I’m seeing in the data,” right? So if you feed it data that says the earth is flat, it’s gonna tell you the earth is flat, right? Because that’s the pattern.

And if the constitutional AI you have sitting on top of it is filled with criteria like “discard any responses that endorse a non-flat-earth viewpoint,” well, you’re still gonna wind up with flat-earth answers. A feedback loop on top of a feedback loop, as it were. And that feels dangerous because on the one hand, it’s reinforcing the biases; on the other hand, I don’t know that it’s reinforcing the biases unless I dig into what the “criteria” are; and on the other other hand, how the hell is all of this making things faster and more efficient for me?

Now you may say that I’m being absurd. And yeah, I get that a lot. And it’s entirely possible in this case since I’m still learning about AI. But here’s why I’m being absurd.  

Because a lot of the language in the literature I’ve been reading in this area keeps referring to “common sense”. That when they’re creating these constitutional AIs, humans will be providing “common sense” criteria “because AI doesn’t evaluate, it just looks for patterns.”  

Which right, I get that. Except in my experience, common sense is usually not that common. 

Look at the “common” things Americans can’t come to a “common” agreement on right now – about race, sex, gender, history. So what is this “common sense” that the literature acts as if it’s so obvious to all of us that it will obviously be inserted into AI as some sort of obvious criteria?  

And you know what else common sense isn’t? It isn’t static. Read what was “common sense” about race, sex, gender, history - 50, 75, a hundred years ago. About intellectual capacity. About morality. Things that would be horrifying today. Well, to some of us.  

Which means that periodically humans will have to update the “common sense” of the constitutional AI. Who? When? How? Because we’re not just talking about software upgrades due to advances in technology. We’re talking criteria around real cultural issues that will affect – often invisibly – the content that we will increasingly be relying on to provide us information.  

Now to be clear, I am in no way saying that constitutional AI is a bad thing. It’s a very valid attempt to solve a very real problem that will only get very much worse the longer we ignore it. And I applaud everyone who’s working on it. 

I just want to make sure we’re actually solving it, not just turning it into another problem instead. 

For Better or Worse: Thinking Differently About Problem-Solving

Some time ago I was cutting my grass – mowing over the leaves and branches on it – when my lawnmower suddenly made a horrible grinding sound, so I stopped. It turns out that what I thought was a pile of leaves was in fact a pile of leaves and a wooden rake, and my lawnmower blade was hopelessly entangled with the tines. Well, not “hopelessly,” actually. I knew that it would just take a few minutes of wrangling to separate it from the blade.

However, because of the way the rake was stuck into the blade, I was going to have to tip the lawnmower over and temporarily flood the engine. Which meant once I got the rake dislodged, I was going to have to wait about 20 minutes for the lawnmower to start. But if I didn’t tip over the lawnmower, well, I wasn’t going to be able to get the rake out, which meant I wasn’t going to be able to use the lawnmower to cut the rest of my grass.

In other words, it occurred to me that I had to actually make the situation worse before I could make it better. And this was kind of a revelation.

Because we usually don’t think that way. We usually think linearly. Here is a problem. I will solve it by doing x. Now things are better.

But this was sort of the opposite of that. And as ridiculous as it sounded, I was actually living it. I turned over the lawnmower, I flooded the engine. I extricated the rake. I turned the lawnmower back over. I tried to start it. It would not start. I waited twenty minutes. I started the lawnmower and went back to cutting the grass.

Worse, to make it better. My mind boggled. So I took this observation to friends of mine, friends who are smarter than me of course, to show them my discovery.

“Look at this,” I said to a surgeon friend. “Sometimes solutions aren’t linear,” I said. “I had to make a thing worse to make it better! What do you think of that!”

“I think you’ve just described surgery,” she said.

“I’m sorry, what?”

“Well, do you really think that opening someone up, exposing their inner organs to the outside world, rooting around in their sinews and blood and muck is actually making them instantly better? Of course you don’t. If you did, you would expect people to hop off the operating table ready to run a marathon. But nobody expects that, do they?” “well, um…” “There’s a ‘recovery time’, right? Recovery from what? Recovery from the surgery, from what we did to you. The very existence of ‘recovery time’, the very fact that everyone is so used to that idea, is proof that the idea of making things worse to make them better is, well, obvious.”

Disappointed but undaunted, I went to a friend of mine who teaches mathematics and said the same thing. That I had this thing that was a problem, and I sort of made it worse, in order to make it better. That this idea seemed to run counter to the way I thought things worked, you know A plus B equals C.

“Because it’s not really about addition, is it? It’s more like multiplication.” “I’m sorry?” “Where a negative times a negative equals a positive. Surely you remember that, right?” “Um, well…” “Did you not take mathematics in middle school?”

Now, setting aside my disappointment that I had not discovered some fascinating new … something … AND the fact that I seem to have fairly snotty friends, it occurred to me that if this is true in mathematics and medicine and, well, gardening, then perhaps it could be true in advertising.

That is, clients come to us with a problem: “Sales are down” or “Awareness is bad” or whatever. And they expect us to come up with a solution that will make things better. You know, A + B = my boss is happy now.

But what if some problems in advertising are like the rake and the lawnmower? What if some problems need to be made worse before they can be made better? Which problems? I don’t know – and it’s entirely possible that they’re very specific to each situation. But now that I’m aware of this idea, I wonder how many problems I’ve misdiagnosed and provided less-than-adequate solutions for.

Which is not to say that if I had told the client “we have to make this worse first” that they would have reacted positively. Clients don’t want to hear “worse”. Clients’ bosses don’t want to hear “worse”. By and large, businesses are not built for it. Certainly stockholders are not.

And just to be clear, I’m not talking about some kind of Vietnam-era “we had to destroy the village in order to save it” thinking. That’s about obliteration; “making something worse” implies a deterioration within the context of the thing, not a total restart. I didn’t, for example, throw out the lawnmower. I just made it inoperable. Worse. That’s different from a sort of “blank slate, let’s start over” thinking – which is also a legitimate tool for problem solving, but which works by throwing the baby out with the bathwater. Like I said, I didn’t throw out the lawnmower and the rake and hire sheep to deal with my grass. I just made a bad situation worse, in order to make it better.

When I returned home, I found my son finally cleaning his room, as his mother had asked him to. Earlier it had been a mess. Now, it was a disaster. There wasn’t even a path from the door to the bed. Stuff was everywhere. I asked him what the hell he was doing. I told him it looked like a bomb had gone off. “Yes,” he said, “I have to make it worse before I can make it better.”

I went back out to cut the grass…

That Thing You Do: Early Thoughts On AI

There’s a scene in Apollo 13 where Kevin Bacon needs to do some calculations. And he’s exhausted and under a lot of stress because, you know, he’s in a broken tin can a zillion miles from earth floating around with Tom Hanks and Bill Paxton. So he asks Mission Control in Houston if they can verify his math. And Mission Control says “sure, not a problem,” and then the camera turns to a row of math nerds with pencils who are going to run the numbers by hand, and then compare notes. That was how they did it in the days before calculators, in the days when computers filled a whole room and couldn’t be bothered to work on, you know, making sure Kevin Bacon was toting up his figures properly. Four crew-cutted nerds with #2s.

And every time I see that scene, once I get past the sort of archaic lunacy of it, I think “Really? That’s what those guys were there for? To do math? Couldn’t they have been doing something more, I dunno, important? Like maybe figuring out why the CSM LiOH canister and the LEM canister weren’t the same shape or something?”

I’ve been thinking about all that a lot as I listen to everyone talk about AI and ChatGPT.

For most of our time on this planet, the only machines humans had were, well, humans. And yeah, the human body is great – it can do a lot of things. So if you don’t have a truck, well, you’re the machine that’s gonna get the load to market. If you don’t have a backhoe, you’re the machine that’s gonna dig the grave. And if you don’t have a calculator, you’re the machine that’s gonna run the numbers (crewcuts optional).

But like most machines that can do a lot of things, the trade-off is that it can’t do any one of those things exceptionally. Because it’s built for diversity, not specialization. A truck can haul more than a human can, a backhoe can dig the hole faster, a calculator can run the numbers with fewer mistakes. But a calculator is less capable of dragging a load to market than you are, and a truck is fairly useless where running the numbers is concerned. The human body can do all those things – not A+ perfect, but better than, you know, not at all. Which is the alternative.

When we look at AI and ChatGPT and all the others that have come out since I started writing this essay, it should be in that context: what have we been mediocre at that this new technology can free us from doing a mediocre job of, so we can focus on something we’re actually good at – indeed, better at than machines? As my buddy Howard McCabe asked, can it scan reams and reams of code for bugs faster and more thoroughly than a human can? Yep. And if it does, does that free up a human to think more deeply about what humans would really want that code to do and how they might use it? Yes, it does. Because it can help us by doing better than us the things we are not built to do well. So why wouldn’t we want that?

But here’s what it can’t do. It can’t make quantity equal quality.

For while I think there are opportunities for it to free us up to do better work, I am concerned that we are falling into a trap that is rampant in advertising generally. Namely that more = more effective. Which, you know, no.

The fact is that more of what I don’t care about doesn’t make me care about it. More of what I don’t want doesn’t make me want it. More is just noise, static, interference. More is just the stuff that actually gets in the way of the stuff that I do want cutting through. More is why people hate advertising (well, one of the reasons).

But “more” is the last refuge – well, the first refuge – of advertisers who are either too lazy or too stupid to really think about their customers. “More” is the strategy of marketers who don’t think their customers matter, or more dangerously, don’t think their own products matter, and so haven’t taken the time to find that unique quality, that unique difference, that unique thing that customers are missing and desiring that their product can provide, in order to really make a connection. They just say “What I say isn’t important – if that’s where my people are, that’s where I’m going to be too.” Well, yeah, pal, but there were a lot of people at the Lizzo concert too, and 99.99% of them were only paying attention to one person.

“Just showing up” (as I have written elsewhere) is not a brand strategy, but a lot of what we are hearing right now is that AI and ChatGPT are the future of advertising because they will generate exponentially more content, which will let brands “just show up” an order of magnitude more than they do now. And agencies will likely fall for this because, well, there are a lot more bad, lazy and stupid ones than there are good ones. And this will undoubtedly elevate the public’s already keen ability to ignore the ads they see, and accelerate the development, use, and effectiveness of ad blockers and other devices that basically say, “oh no you don’t”. All of which will make what we do less effective.

So what do we do? Because if we’ve learned anything in advertising over the past hundred years it’s that anyone who bets against the technology will lose.

What we do is what smart agencies and smart clients have always done when faced with a cosmic leap in technology: use it with insight and imagination (often another way of saying “creativity”) to make work that people actually care about. That they think about when all those other things they don’t care about are avalanching them. It’s as simple – and as difficult – as that.

Who said advertising wasn’t rocket science?

I Wish That I Knew What I Know Now: Career advice from people more successful than me

We like to think we have a master plan. We like to think life is linear. We like to think we know what we’re doing while we’re doing it. But we also know that pretty much none of this is true. The number of times we look back on our lives and think “If I’d only…” or “I should have…” or even “What in the name of God was I thinking…?” is, unfortunately, greater than we would like to admit.

So when a journalist asked me “What’s the one piece of career advice you wish you’d gotten when you were first starting out?”, I was certain I would be able to regale her with memories, aphorisms, witticisms and other bon mots that would make me the Oscar Wilde of our age.

I was wrong. I had nothing.

Oh sure, there were things like “Buy Google when it IPOs at $85 in 2004.” Or “Your good relationship with the client does not extend to telling him what you think of his karaoke.” Or even “The flight for the big presentation is at 4, not 4:30.” But nothing I could really use, nothing I wanted to affix my name to in public (like I’ve just done here. Ahem. Oh well…).

So I passed the buck. I reached out to some of my closest friends — and to some folks I wished were my closest friends — for their two cents. What career advice did they wish they’d had way back when we were all young and firm and comparatively debt-free and able to bounce back from all-nighters with a staggering effortlessness?

What I got was a lot more than I bargained for. Apparently my friends have lots of opinions. And they’re not shy about sharing them. And while the journalist seems to have disappeared as effectively as a late-inning lead by my beloved White Sox, the advice I ended up with still remains. And it’s still valuable. And a lot of it had to do with warning their young selves about the future.

“Plan on the inevitability of middle age and age-related obsolescence,” said my buddy the designer Gary Hudson, who was not alone in this admonition. And while few were complaining (okay, some were complaining — this is advertising, after all), they were still making it clear that they would have liked to have been made aware of what the future looked like so they could have planned for it. Because you know how good people in advertising are at planning.

And speaking of planning, it was also interesting how many talked about relationships, about how they wished they had made more of an effort to stay connected to people. Not purely from a business networking standpoint (although to be sure there was a lot of that. Like Contagious’s Paul Kemp-Robertson, who explained “I must have applied for 500 jobs via the usual listings and recruiters, but I got my first break because I freelanced with someone who just happened to know someone who was setting up a new venture and needed eager young fools to work for free.”) but from a quality-of-life standpoint. MUH-TAY-ZIK | HOF-FER’s John Matejczyk said “I’ve met so many great people along the way who I’m no longer in touch with. Kinda sad.” And Co:Collective’s Tiffany Rolfe echoed that sentiment, saying “I wished I had done even more of this rather than only focusing on my work and being too busy.”

Of course, “focusing on the work” came in for a large dose of career advice, to be sure. The idea that there’s a lot to do, a lot of competition to do it, and a lot of opportunity to piss it all away. “Persistence creates luck and put the fucking time in” was what illustrator Hal Mayforth advised. Leo Burnett’s Director of Talent Acquisition Debbie Bougdanous expressed a similar sentiment, but put it in a way that perhaps is more befitting her position: “Always be the last person to leave. Ask anyone if they need help before you leave at night. Those people always seem to do well.”

Where exactly you put in that effort, however, was also extremely important, and there were a number of people who echoed McCann’s Rob Reilly’s career advice (“Don’t chase the titles or money. Chase the work. The title and money follow.”). And while I completely understood the sentiment — cash is fleeting, but the Alex Bogusky-Rob Reilly-Dan Wieden seal of approval on your resume lasts a lifetime — as someone who has taught literally hundreds of kids who are emerging from universities under mountains of debt, I wondered how realistic it was for anyone starting out today. Because it’s not about telling these kids to suck it up and eat ramen noodles for a couple of years while forgoing the flat for their parents’ basement. It’s about them literally not being able to afford to take the job at the better shop, unless someone is subsidizing them.

And maybe that sounds a little harsh, but honestly, the career advice itself was full of hard — and valuable — truths like that. Like Miami Ad School’s Hillary Lannan, who reminded me that we’re not as precious as we think we are and that the sooner we understand it, the better our careers will be. “We’re all replaceable,” she said. “No one cares about you having your job as much as you do.” Oh, if I’d only known that when I was in my twenties…

And still the advice pours in. From people I emailed months ago. From people who already gave me advice and are giving me more. From friends of people who heard about my question and want to weigh in. Good advice. Great advice. Weird advice. Terrible advice.

And, perhaps the best career advice of all, which came from Ogilvy’s George Tannenbaum — “The advice should be, don’t listen to advice.”

Thanks to everyone who took time out of their busy days to provide me with valuable input and insight. And stay tuned, as invariably more career advice is on the way.

Rumours of My Demise: Some thoughts on the value of an ad campaign

My buddy Mark Dimassimo was pissed. He’d been watching an inordinate amount of tennis and he’d reached his limit with the constant repetition of ads. Not that the ads were bad the first five, ten or twenty times he saw them. But around the hundredth time he was subjected to the same, singular “tennis” ad that each company had deigned to produce in order to be “relevant” during the tournament, he was, as I could tell from his tweets, texts and messages, about ready to hurl something toxic and large at his television machine.

And brother, I can relate.

Read More

If You Teach a Man to Fish: Why “fishing where the fish are” is not enough

For as long as I’ve been doing this, people have been telling me that the surest path to marketing success was to always be “fishing where the fish are.” That is, put your message where the people you want to reach are.

And that makes sense, right? If you’re talking to motorcycle riders, advertise where motorcycle riders are. If you’re talking to Moms, put your message where Moms are. And if you’re talking to Moms who ride motorcycles, well, you get the idea.

And this advice has served advertisers and their agencies for centuries. I would bet that if you dug deep enough into Pompeii’s ashes you would find an ancient Roman vellum delivering this aphorism in some Latinate version of corkscrew advertising-ese.

Read More

I Should Have Known Better: The real value of improvement

It’s mind-boggling sometimes to think about how much has changed in the advertising industry. And while I’m sure every generation has said the same thing, the simple fact is that they were wrong and we are right. No generation of creatives, account people and clients has had to manage as much disruption as we have in terms of media, demographics, economics… my god, the list is endless.

In fact it’s so insane that I have been on something of a crusade to find the truisms that our era has NOT rendered obsolete. The ones we can still rely upon – at least until someone invents something new this afternoon. And thankfully, many of them still are true. Like the one about focusing on what the customer needs over what you want to sell them. And a couple of others too. 

Read More

The Madness of the Method: Why process matters

You walk into a meeting with a new client or a new agency, and you’re in that honeymoon phase when everyone is attentive and polite and laughs at each other’s stupid stupid jokes. And you’re there so you can discuss “the process”, the way you’re all going to get the work done. The “great” work done. The great work we’re all going to be proud of. Together.

Whereupon someone, usually mid-level, begins to describe something that has so many damn moving parts, so many checks and balances, so many org charts with dotted lines that seem to lead to other org charts with still more dotted lines, that you can’t imagine yourself actually doing any of it. And you pray to god that no one actually does.

Read More

Keep the Customer Satisfied: The differences between B2B & B2C advertising

Ask a B2B shop about their B2C cousins and invariably you will hear something like this: “B2C agencies are a bunch of undisciplined, overpriced children who should stay away from the serious business of B2B marketing and leave it to the adults before they do some real damage.”

Ask a consumer agency a similar question and they’ll invariably reply: “B2B shops should keep their second-rate versions of ideas that they stole from outdated back issues of B2C awards annuals and leave the creativity to the real agencies – the ones who do the consumer marketing that those B2B shops only wish they could still do.”

Read More

The Kids Are All Right

“[They are] a generation of coddled infants who developed into demanding tyrants.”

I can’t walk into a brainstorm, client meeting, focus group, or marketing conference without hearing people complain about Millennials. “They expect everything now.” I hear again and again. “They want their jobs to revolve around their schedules. They’re not as committed as we were. They don’t know the value of hard work. They’re spoiled babies who refuse to grow up. And they all expect to be paid like millionaires.”

All of which I would be happy to ignore or agree with or whatever in order to still invoice the gig, if it were not for two extremely important facts.

Read More

How to Client

There are literally thousands of books that will tell you how to manage a brand. And there are at least that many that will tell you how to run a company. Put those together and you might have the number that will tell you how to take care of the people you’re leading to do both of those things.

I know this because in addition to making advertising and teaching it, I review books on it at The Agency Review. And every time I think I’ve read all that there are, the mailman shows up with a dumpster full of new ones and drops them on my desk.

But there’s one important part of being successful – especially in the marketing space – that these books are frustratingly silent on and it’s this: How to client.

Read More

Who Are You?: The case for a creative director

We all know what an art director does, right? They make the pictures - and in a society that is as visually obsessed as ours is, that’s clearly a pretty important job.

And we all know what copywriters do too, right? They come up with the words that no one reads except the lawyers and the brand managers.

But creative directors? They don’t write – though they may have once. They don’t design, though they may have once. And they sure as hell don’t code. So just exactly what do they do, and more importantly, why the hell are you paying them?

What they do - and what you are actually paying them to do whether you realize it or not – is to be the bridge between the problem you have and the solution you pray people you don’t understand will come up with.

Read More

Pleased to Meet You, Won't You Guess My Name?

My friend Dave Marinaccio likes to say that even bad advertising works better than no advertising. And he’s right, of course. For as Woody Allen famously said, 80% of success is just showing up – and advertising, in one sense, is simply about showing up when your competitor does not.

What Dave doesn’t mention about bad advertising is that “showing up” is about all it has going for it. It’s sort of like drunk-dialing your ex-girlfriend. Yes, you’re making yourself top of mind with her (awareness!) and you’re occupying her thoughts to the exclusion of everyone else (attention!) – but you’re also rambling and mumbling and cursing and vomiting and being fairly incoherent. But hey! You’re showing up!

Read More

Talking ‘bout My Generation: What Demand Generation Means Now

You remember demand generation, right? Born in that first mythical golden advertising age, when Claude C. Hopkins and Albert Lasker strode the earth, demand generation emerged when clients found themselves saddled with unsellable products. Products that they brought to agencies, saying, “I don’t know what to do with this. You think you can come up with a reason for humans to buy it?”

Read More