Effective Enterprise Java: Architecture

The Fallacies of Enterprise Systems

Ted Neward http://www.neward.net/ted

Credentials

• Who is this guy?

– Independent Consultant, Mentor, Architect
– Author
  • Server-Based Java Programming (Manning, 2000)
  • Effective Enterprise Java (Addison-Wesley, 2004)
  • C# in a Nutshell (with Drayton, Albahari; O'Reilly, 2001)
  • SSCLI Essentials (with Stutz, Shilling; O'Reilly, 2003)
– Instructor, DevelopMentor
  • .NET
  • Java
– Papers at www.neward.net/ted/Papers
– Weblog at www.neward.net/ted/weblog

The Fallacies of Enterprise Computing

– 1) The network is reliable
– 2) Latency is zero
– 3) Bandwidth is infinite
– 4) The network is secure
– 5) Topology doesn't change
– 6) There is one administrator
– 7) Transport cost is zero
– 8) The network is homogeneous
– 9) The system is monolithic
– 10) The system is finished

“The network is reliable”

• Hardware fails
  – Routers go down, wires are cut (sometimes catastrophically), power spikes, hurricanes, …
  – Sometimes it’s even as simple as “who turned off the server?”
• Software fails
  – Processes throw exceptions when they shouldn’t need to, or hang, or …
  – … or sometimes you get hacked
• Physics fails
  – Not very often (we hope), but signal just doesn’t travel the wireless airwaves like it should

Don’t assume reliability

Assume that at any point during remote communication, the network can simply “go away” for no reason

• Code appropriately: timeouts, retries, backups, and so on
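
As a concrete illustration of "code appropriately," here is a minimal Java sketch of a timeout-and-retry wrapper around a remote call. The RetryingCaller name, the retry count, and the timeout are illustrative assumptions, not anything prescribed in the talk.

import java.util.concurrent.*;

// Minimal sketch of "timeouts and retries" around a remote call.
// The retry count and timeout are illustrative choices.
public class RetryingCaller {
    private static final int MAX_RETRIES = 3;
    private static final long TIMEOUT_SECONDS = 5;

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public <T> T call(Callable<T> remoteCall) throws Exception {
        Exception lastFailure = null;
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            Future<T> future = executor.submit(remoteCall);
            try {
                // Give the network a bounded amount of time, then give up on this attempt
                return future.get(TIMEOUT_SECONDS, TimeUnit.SECONDS);
            } catch (TimeoutException | ExecutionException e) {
                future.cancel(true);   // abandon the hung or failed attempt
                lastFailure = e;       // remember why, in case every retry fails
            }
        }
        throw new Exception("Remote call failed after " + MAX_RETRIES + " attempts", lastFailure);
    }
}

The point of the sketch is that the caller decides up front what "the network went away" means (a bounded wait) and what to do about it (retry, then surface the failure), rather than blocking forever.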

“Latency is zero”

• Bits take time to move through the networking layers and physical hardware
  – And remember, they need to do it lots of times (once per intermediary)!
  – Even fast networks are orders of magnitude slower than slow PC buses

Count the network time

Be frugal in passing data across the network; the more data passed, the longer it’ll take for it all to get there

– Remember, TCP/IP tries to “guarantee” delivery of all of those packets, which grows steadily more difficult with a larger number of packets
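
To see why round trips dominate, compare a chatty remote interface with a coarse-grained one. Both interfaces below are hypothetical, sketched only to show that three remote calls pay the network latency three times while one call pays it once.

// Sketch only: a hypothetical remote CustomerService, to show how per-call
// latency adds up. Neither interface comes from a real library.

// Chatty: three round trips, three helpings of latency
interface ChattyCustomerService {
    String getName(long customerId);
    String getAddress(long customerId);
    String getPhone(long customerId);
}

// Coarse-grained: one round trip carries everything the caller needs
interface CoarseCustomerService {
    CustomerSummary getCustomerSummary(long customerId);
}

// Simple data holder ("data transfer object") passed across the wire once
class CustomerSummary implements java.io.Serializable {
    String name;
    String address;
    String phone;
}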

“Bandwidth is infinite”

• A T-1 line’s “phat pipe” gets saturated pretty quickly under heavy website traffic, …
  – Once we throw Web services into the mix, expect the bandwidth demands to double or triple
  – Once “everything goes over one wire”, expect the available bandwidth for your application to be a fraction of what it is now
  – Remember, laying down new wire (fiber-optic) is an exercise in digging up your street…
• Developers frequently write code on small, lightly-congested LANs or standalone machines/laptops
  – But Production looks a lot different than a standalone laptop…

Don’t send more than you need to send

Be frugal with the amount of data you send across the wire; send only that which can’t be cached

– Ironically, this argues against the browser-based application, since half the data sent is presentation information; hence the rise of the “smart client”
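
One way to "send only that which can't be cached" is to keep rarely-changing reference data on the client side after the first fetch. The sketch below is generic and assumption-laden: ReferenceDataCache and fetchFromServer are stand-ins for whatever remote lookup an application actually makes.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of "send only what can't be cached": reference data that rarely
// changes is fetched once and reused instead of crossing the wire every time.
public class ReferenceDataCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> fetchFromServer;   // stand-in for any remote lookup

    public ReferenceDataCache(Function<K, V> fetchFromServer) {
        this.fetchFromServer = fetchFromServer;
    }

    public V get(K key) {
        // Only goes across the network on a cache miss
        return cache.computeIfAbsent(key, fetchFromServer);
    }
}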

“The network is secure”

• “Developers are competent”
  – Not always… how much do you know about network security? How about your coworkers, including Harvey The Intern?
• “Remote data can be trusted”
  – TCP/IP packets themselves can be spoofed as to their source
  – Major impetus for IPv6 and other next-generation Internet efforts
• “Remote system can be trusted”
  – Even if it could at one point, how do you know it hasn’t been hacked since then?
• “It’ll never run outside of our firewall”
  – Lots of people carrying laptops, PDAs, Blackberrys…
  – Lots of wireless networks going up…

Assume insecurity

Remember that any application listening to the network will answer to any client that connects: the one you wrote, and Telnet, the hacker’s best friend

– If you assume that every byte that comes in off the network has to go through a 12-step recovery program before being used anywhere in your program, that’s a good start
– If you find yourself arguing crypto key bit size with another developer, you’re arguing over the size of the vault door on your tent
– If you find yourself trusting firewalls to take care of your security needs, please don’t work for my company
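
The "12-step recovery program" idea can be as simple as refusing to let raw network bytes into the application until they have been length-checked and whitelisted. The sketch below is illustrative only; the size limit and allowed characters are assumptions that depend entirely on the protocol being parsed.

import java.nio.charset.StandardCharsets;

// Sketch of treating every inbound byte as suspect before it reaches
// application logic. Limits and allowed characters are illustrative.
public final class InboundValidator {
    private static final int MAX_LENGTH = 1024;

    public static String toValidatedString(byte[] raw) {
        if (raw == null || raw.length == 0 || raw.length > MAX_LENGTH) {
            throw new IllegalArgumentException("Rejected: bad length");
        }
        String text = new String(raw, StandardCharsets.UTF_8);
        // Whitelist, don't blacklist: accept only the characters you expect
        if (!text.matches("[A-Za-z0-9 ._@-]+")) {
            throw new IllegalArgumentException("Rejected: unexpected characters");
        }
        return text;
    }
}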

“Topology doesn’t change”

• Topological changes sometimes happen without planning
  – Hardware failures, software failures, natural disasters, and so on
  – Don’t forget natural (Hurricane Ivan & friends) or manmade (9/11) disasters!
• The code could run on a laptop (or PDA!) that gets carried from hotel to hotel
• The network could be a wireless one, where nodes are constantly coming & going
  – or worse, it’s a combination of wired & wireless
• The code could also be “upgraded” (or “downgraded”) to run in an entirely different environment than the one you developed for

Make use of the layers of indirection

Networking frequently makes available “layers of indirection” to keep physical hardware topology somewhat hidden; use it

– This means DNS, NAT, and so on
– Some programming models provide one (JNDI)
– Consider peer-to-peer tools (WS-Discovery, UDP/IP, Multicast, …) to help keep track of topological changes
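
Since the slide calls out JNDI as one such layer of indirection, here is a sketch of looking up a database connection by logical name instead of hard-coding a host. The JNDI name "java:comp/env/jdbc/OrdersDB" is an invented example; the real binding would come from your deployment configuration.

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Sketch of using a naming layer (JNDI) instead of a hard-coded host:port.
public class DataSourceLocator {
    public DataSource lookupOrdersDataSource() throws NamingException {
        Context ctx = new InitialContext();
        // The container maps this logical name onto whatever physical
        // database host the administrator has configured today
        return (DataSource) ctx.lookup("java:comp/env/jdbc/OrdersDB");
    }
}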

“There is one administrator”

• “… and he will never quit, get hit by a bus, or take a vacation”
  – Believe it or not, even hard-core sysadmin geeks like to get away from the computer once in a while
  – Maybe even date!
• “But we control both ends”
  – For now, perhaps, but what happens if your app is wildly successful? Or your company buys a competitor? Or is bought? Or partners up?

Make the system administrator-friendly

At any point, a relatively competent system administrator should be able to use and/or monitor and/or diagnose the system

– Make use of O/S management facilities
– Build in the management/administrative functionality that isn’t otherwise handled (adding/removing users, finding “lost” records, and so on)
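
On the Java platform, one common way to build in that management functionality is to expose it through JMX so standard consoles can monitor and invoke it. The talk doesn't mandate JMX, so treat the MBean below, its attribute, and its object name as an illustrative sketch.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// In OrderServiceAdminMBean.java: the management interface an administrator
// (or a JMX console) sees. The attribute and operation are illustrative.
public interface OrderServiceAdminMBean {
    int getPendingOrderCount();   // something to monitor
    void clearStuckOrders();      // something to invoke when things go wrong
}

// In OrderServiceAdmin.java: the standard-MBean implementation.
public class OrderServiceAdmin implements OrderServiceAdminMBean {
    public int getPendingOrderCount() { return 0; /* query real state here */ }
    public void clearStuckOrders()    { /* perform the administrative action */ }

    public static void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new OrderServiceAdmin(),
                             new ObjectName("myapp:type=OrderServiceAdmin"));
    }
}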

“Transport cost is zero”

• Pointers don’t travel well
  – So networking stacks spend a lot of time shuffling bits into a stream of bytes that can be sent across the wire
  – Process is called marshaling, and it’s not a free action
    • Both Java and .NET Remoting use Serialization to do the marshaling
    • Web services have to marshal/unmarshal to XML

Understand what you’re sending, and what it costs

Measure the full cost of sending data across the wire by measuring the full cost of marshaling

– Either recreate the marshaling (by serializing all the parameters and back)
– Or watch the data go across the wire
– Or measure with a profiler
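
The "recreate the marshaling" option can be done in a few lines: serialize the would-be parameter exactly as Java serialization would, and look at the byte count and elapsed time. The helper below is a rough sketch, not a substitute for a profiler or a wire-level trace.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Rough sketch of "recreate the marshaling": serialize a would-be call
// parameter and report its size and cost before it ever touches the wire.
public class MarshalingCost {
    public static void measure(Serializable parameter) throws IOException {
        long start = System.nanoTime();
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(parameter);
        }
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println(parameter.getClass().getSimpleName()
                + ": " + bytes.size() + " bytes, ~" + micros + " microseconds to marshal");
    }
}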

“The network is homogeneous”

• Not even my home network is homogeneous: Windows & Mac OS X
  – Most (all?) companies are also a mixture
  – Originally an argument for “why Java”
  – But along came .NET… and Ruby… and …
  – Never mind legacy C/C++, COBOL, …
• Even if it is today, there’s tomorrow
  – And the inevitable partnerships, buyouts, mergers, and other corporate activities
  – You can run, but you can’t hide

Build systems that don’t assume a specific platform

When designing systems, never assume it will always be “X” at both ends

– Stick to well-known technologies at the edges of your component boundaries
– When you do interop, prefer to do so at remoting & component boundaries

“The system is monolithic”

• “Oh, sure, we can make that change; we just have to roll out new stubs…”
  – Even if you control both ends today…
  – … tomorrow brings corporate change
• “What do you mean, I broke your mission-critical app? Who are you again?”
  – Remember that databases take on a life of their own, as well; you may not own your own database instance (or its schema) after release

Clearly delineate your component boundaries

Design the system to make it explicit which parts are tightly-coupled “atoms”, and keep the network out of those atoms as much as possible

– If the component doesn’t straddle the network, it’s much easier to control
– Ask yourself, “How do we react if the schema and code need to evolve independently?”
– Prefer to define contracts, not shared types
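
"Contracts, not shared types" can be as simple as publishing an interface plus a flat, serializable data carrier at the boundary, so the internal domain model and database schema can evolve without breaking callers. The names below are invented for illustration.

// Illustrative boundary contract: callers on the far side see only this
// interface and its flat data carrier, never the internal domain classes
// or database schema behind them.
public interface InventoryContract {
    StockLevel checkStock(String sku);
}

// A simple, versionable shape that can evolve separately from the schema
class StockLevel implements java.io.Serializable {
    public String sku;
    public int quantityOnHand;
}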

“The system is finished”

• The only time a system is “finished” is when the server is shut down, the source code is deleted, and the developers who worked on it are all Terminated, With Extreme Prejudice!

– Otherwise, systems will be “reborn” in another project because “we need something that does just what XYZ does, only…”
– Even if you’ve left the company, the code you wrote takes on a life of its own

Build systems to last

Create systems with “hook points” that allow future programmers to slip in variation without significant rip-and-tear across the codebase; eschew today’s “flavor of the month” technology

– Keep designs simple and interface- or protocol-based
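
A "hook point" can be nothing more than an interface plus a discovery mechanism, so a future variation can be dropped in without editing the caller. The sketch below uses java.util.ServiceLoader as one possible mechanism; TaxPolicy and BillingService are invented names, and the no-tax fallback is just a placeholder.

import java.util.ServiceLoader;

// Illustrative "hook point": variations implement TaxPolicy and are picked up
// at runtime without changes to BillingService.
public interface TaxPolicy {
    double taxFor(double amount);
}

class BillingService {
    private final TaxPolicy taxPolicy;

    BillingService() {
        // Use whatever TaxPolicy implementation is registered on the classpath
        // (via META-INF/services); fall back to "no tax" if none is found.
        this.taxPolicy = ServiceLoader.load(TaxPolicy.class)
                                      .findFirst()
                                      .orElse(amount -> 0.0);
    }

    double totalWithTax(double amount) {
        return amount + taxPolicy.taxFor(amount);
    }
}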

Summary

• Everybody has made these mistakes
  – No shame in admitting it!
• Learn from the mistakes
  – Recognize when you’re falling into the traps
  – Avoid the implicit assumptions during design
• Hold design reviews against the fallacies
  – Be aggressive in stamping them out

Questions?