Classic Season 5

3h 32m 54s
English
Paid
October 24, 2024

Lesson 1

In the "What Goes in Active Records" series (part 1 and part 2), we looked at some design constraints for what goes in ActiveRecord models. Sometimes, these constraints can lead to very small classes being extracted, which often feels awkward. This screencast looks at one such class: a single line of service code with an alarmingly long test file. By creating a new value class and tightening the service's interface around it, we shorten the tests slightly.

Then, by collapsing the service into the new value class, we shorten them even more. We're left with tests that are easier to reason about, and with a new abstraction reified in the code. Finally, although this isn't stated in the screencast, the tests are fully isolated from Rails after this refactoring, whereas the original tests integrated with it. The new tests are about eight times faster (and will remain fast even as the application grows).
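
The shape of that refactoring can be sketched in a few lines of plain Ruby (names here are invented for illustration, not taken from the screencast): a one-line service collapses into a value class, and the tests can then construct the value directly, with no Rails involved.

```ruby
require "date"

# Before: a service holding a single line of logic.
class SubscriptionExpirationService
  def self.expired?(expires_on, today)
    expires_on < today
  end
end

# After: the same line collapsed into a Subscription value object.
# The logic now lives next to the data it operates on.
Subscription = Struct.new(:plan, :expires_on) do
  def expired?(today = Date.today)
    expires_on < today
  end

  def active?(today = Date.today)
    !expired?(today)
  end
end
```

Tests of the value class need only `Subscription.new(...)`, which is where the isolation from Rails comes from.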

Lesson 2

This is a continuation of the previous screencast, "Collapsing Services into Values". We'll transform the Subscription value object we created into a database table and ActiveRecord model.

Although creating the value object in the last screencast did clean up the system, it still left the User class with a lot of knowledge about subscriptions. User still contained their validations, and in a complete system it would have knowledge about how to create subscriptions from itself and update itself from subscriptions. When we extract the subscriptions into their own table and model, this knowledge disappears from User entirely, although it does re-raise the question we started with: where should the logic go?

Lesson 3

Rubinius is a Ruby implementation known for being "written in Ruby", although that's not entirely true since it does have a large VM written in C++. We'll start off by briefly looking at the structure of Rubinius, focusing on the load order and object system bootstrapping. Then we'll remove a feature from it by updating all of the source files that reference that feature, then verifying the change both by running the tests and by visual inspection.

Note: At the very beginning, I misspeak and say that Class's superclass is Class. This isn't true: Class's superclass is Module, and the class of Class is Class, which is what's being set up in the line in question. Object system bootstrapping is hard to keep straight!
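
The corrected relationships can be checked directly in MRI, which exhibits the same structure that Rubinius bootstraps in its kernel:

```ruby
# The relationships from the note, verified in Ruby itself.
Class.superclass   # => Module  -- Class's superclass is Module...
Class.class        # => Class   -- ...but the class of Class is Class
Module.superclass  # => Object
Object.class       # => Class
```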

Lesson 4

Many people assume that Python and Ruby have similar object systems. That's sort of true, in that they have roughly the same level of dynamic expressiveness, but the way that they achieve it is actually quite different. We'll compare the two systems, focusing on one very fundamental division between them: Python deals with attributes, but Ruby deals with methods, with each implementing one in terms of the other. This leads to some characteristic properties of the language: Python's consistency and focus on correctness vs. Ruby's terseness for defining and calling methods.
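
The Ruby half of that division is visible in a few lines (a sketch with an invented Person class; the Python side, where methods are attributes looked up via getattr, is covered in the screencast):

```ruby
# In Ruby there are no attributes, only methods: attr_accessor simply
# defines a reader and a writer method over an instance variable, so
# "person.name" below is a method call, not attribute access.
class Person
  attr_accessor :name  # defines #name and #name=

  def initialize(name)
    @name = name
  end
end

person = Person.new("Ada")
person.name           # => "Ada" -- a method call...
person.method(:name)  # ...which we can even reify as a Method object
```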

Lesson 5

Imagining the most naive possible Rails app, it would probably do a lot of data validation at the controller level. We don't do that, of course: we push the data integrity responsibility down to ActiveRecord, via validations. Unfortunately, ActiveRecord validations aren't good enough either: there are a few ways that they can be sidestepped, and those ways will eventually come up in many real-world apps. This screencast looks at those sidestepping mechanisms, the problems they create, and how to solve them by using real database constraints.

Lesson 6

Hard coupling—putting the name of one class inside of another—is a problem for both design and testing. It makes adjusting the objects' boundaries harder because the static names must be changed and different dependencies can't be swapped in for those hard-coupled points. It makes testing harder because you can't test one object without it invoking the other, so focusing a test is difficult. Dependency injection can help with this, but it's really a special case of a more general principle: separating the arrangement of a program's pieces from the work that they actually do.

This screencast is an example of separating arrangement and work, but not by dependency injection. Instead, we separate the data flow between objects from the objects themselves, which eventually allows us to convert to a simple actor-based concurrency model in one smooth transition.
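
A minimal sketch of the principle (all names invented, not the screencast's code): the workers know nothing about each other, and a separate arrangement function wires their data flow together, so any step could later be swapped out or run concurrently.

```ruby
# Work: each object does one thing and names no collaborators.
class Downloader
  def call(url)
    "<html from #{url}>"
  end
end

class Parser
  def call(html)
    html.scan(/html/)
  end
end

# Arrangement: the data flow lives here, outside the workers.
def arrange(steps, input)
  steps.reduce(input) { |value, step| step.call(value) }
end

arrange([Downloader.new, Parser.new], "example.com")  # => ["html"]
```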

Lesson 7

Primitive obsession is the use of primitive values—integers, strings, arrays, hashes, etc.—when a more specialized, domain-relevant object would provide a better design. Rather than discuss the idea abstractly, this screencast is a concrete example: we examine Destroy All Software's Screencast class, then replace it throughout the system with a simple hash. At the end, we review the changes to get a sense of what primitive obsession does to a design.
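
The contrast can be sketched briefly (field names here are guesses for illustration, not DAS's actual Screencast class): with a hash, every caller must know the keys and re-derive values like the slug; the value class names the concept and gives derived data a home.

```ruby
# Primitive version: a bare hash. Works, but the knowledge of how to
# compute a slug leaks into every caller.
screencast_hash = { title: "Primitive Obsession", duration: 949 }
screencast_hash[:title].downcase.gsub(" ", "-")  # => "primitive-obsession"

# Value-class version: the derived value lives with the data.
class Screencast
  attr_reader :title, :duration

  def initialize(title:, duration:)
    @title = title
    @duration = duration
  end

  def slug
    title.downcase.gsub(" ", "-")
  end
end

Screencast.new(title: "Primitive Obsession", duration: 949).slug
# => "primitive-obsession"
```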

Note: As mentioned in the screencast, no tests are run or touched. At over 15 minutes long, this screencast is well on the high end of DAS lengths and test maintenance would've increased that. As a result, at least one mistake is made: the Screencast.slug method should've taken a screencast and computed the slug from it. This doesn't impact the design analysis, but certainly reaffirms the importance of testing.

Lesson 8

This screencast presents a method for writing isolated tests without using stubs or mocks. We'll explicitly separate the value part of an object—its instance variables—from the behavior part—its methods. Then, when testing other classes, we can integrate them only with the value part, as exposed by the accessor methods.

We avoid the danger of mocks and stubs going out of sync with the code being tested, since we're integrating with real accessor methods that will exist in the final object. We also avoid the danger of accidentally calling complex methods that shouldn't be under test: since we only test against the data part of the object, there's no risk of unintentionally integrating with its behavior.

This method isn't universal. It falls flat on objects that are heavy on behavior and light on data. But it is one way to test against commonly-used, data-heavy classes (in the case of the Destroy All Software codebase that we work on here, the Screencast class).
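
A minimal sketch of the technique with invented classes: Screencast's value part is its instance variables, exposed through plain accessors, and TitleFormatter is tested against that data alone, never against Screencast's behavior.

```ruby
class Screencast
  # Value part: instance variables behind plain accessors.
  attr_reader :title, :season

  def initialize(title:, season:)
    @title = title
    @season = season
  end

  # Behavior part: never invoked by TitleFormatter's tests.
  def publish!
    raise "talks to the network"
  end
end

class TitleFormatter
  def format(screencast)
    "S#{screencast.season}: #{screencast.title}"
  end
end

# The "double" is a real Screencast carrying only data, so it can't
# drift out of sync with the accessors the way a mock could.
screencast = Screencast.new(title: "A Bit of C", season: 5)
TitleFormatter.new.format(screencast)  # => "S5: A Bit of C"
```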

Lesson 9

This screencast demonstrates a refactoring through three and a half paradigms. First, we see the code in imperative form: code that mutates data, with the code and data kept separate. Then we merge some of the data and code into an object, giving us object-oriented code: code and data mixed, with mutation. We quickly look at a variant of this where the object is only allowed to have pure functions (no mutation or IO). Finally, we remove the object, leaving only the functions, which gives us a more standard functional solution.
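
The same toy computation in each of those forms (a hedged example invented here, not the screencast's code):

```ruby
# 1. Imperative: code mutates data that lives outside it.
durations = [671, 637, 701]
total = 0
durations.each { |d| total += d }

# 2. Object-oriented: code and data merged, still with mutation.
class DurationTally
  attr_reader :total

  def initialize
    @total = 0
  end

  def add(duration)
    @total += duration
  end
end

# 3. Functional: no object, no mutation -- just a pure function.
def total_duration(durations)
  durations.reduce(0) { |sum, d| sum + d }
end
```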

Lesson 10

We'll start by translating a bug report into a test, providing an objective first-pass validation of any fix we come up with. Then we'll push down into the system using tests as a guide: each test we write will be smaller and more focused. To simulate ignorance of the system, we'll avoid looking at production code until we've gotten to the very bottom of the stack. To simulate a complex bug where the stack trace doesn't indicate the full subtlety of interactions, we'll push down one step at a time instead of simply jumping to the deepest part of the stack. When we get down to the defect itself, we can then run the tests we generated in reverse order, "popping stack" back to the system-level view.

Lesson 11

Before BDD and tools like RSpec took off, tests were often written in a "test case" style: they were phrased in the computer's terms. Well-written RSpec usually approaches testing from the human direction: instead of focusing on the software's terminology, the human-visible behavior is specified in English, and the examples map those English descriptions onto software terminology. In this screencast we'll refactor part of Hamster's test suite, translating it from a test case style to an example style. This will require many trade-offs, most notably trading completeness of test coverage in some corner cases for readability of test names.

Note: There's a bug in the mutation test that I missed during recording. Because RSpec's "let" variables are memoized, the "empty" value is only computed once. If it were mutated, both references to "empty" would point to the mutated value, defeating the test. As pointed out by Myron Marston, the test would even pass for Ruby's Array class, which clearly mutates. Unfortunately, mistakes like this are possible when validating a test by breaking the test itself, rather than by breaking the production code.
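
The failure mode can be reproduced without RSpec, using a memoized helper in plain Ruby (a sketch, not the screencast's code): because both references name the same object, a before/after comparison can't catch the mutation.

```ruby
# Memoized, like RSpec's `let`: computed once, same object every call.
def empty
  @empty ||= []
end

before = empty
empty << :mutated          # mutate through one reference

# Both names point at the one memoized array, so this "no mutation"
# check passes even for a clearly mutable Array:
before.equal?(empty)       # => true
```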

Lesson 12

Most of the software running on our machines is written in C—the operating system, our VMs and compilers, the Unix shell and its various utilities, and our editors. This screencast briefly introduces a project written in C, focusing on its unit tests and the ways in which its design is similar to OO. This is not a tutorial on C—programming in it effectively requires a lot of learning. But, as demonstrated here, many ideas and practices used in more modern languages do apply directly to programming in C.

Note: there's at least one pointer bug visible in the code shown here ("trie_create" incorrectly zeroes the "values" field instead of the entire trie). Thanks to Leo Cassarani for pointing this out.

Lesson 13

When analyzing a system's behavior or performance, there's a huge spectrum of tools available: everything from the load average—which boils a machine down to a single number—to DTrace and STrace, which can provide very fine-grained information. We'll briefly look at the load average and why it's not sufficient. Then we'll look at one particular method that's in between the two extremes: the /usr/bin/time command, which can report many statistics about program execution. We'll use it to compute the number of involuntary context switches in each of Destroy All Software's Cucumber scenarios, giving us a starting point for localizing an anomaly in the system's behavior.

Note: Some questions have been raised about the exact details of load average accounting. Unfortunately, I'm not an expert on Linux kernel internals, so I'm going to resist the urge to try to correct myself. To be brief: I may have significantly overstated the impact of IO on load average accounting. In any case, the larger point is unaffected: load average is at one extreme of the continuum of performance analysis tools and, once you get past the first few seconds of performance analysis, you'll need to dig deeper.

Lesson 14

In this screencast, we use Ruby's remarkable flexibility to actually implement the theoretical actor syntax shown in "Separating Arrangement and Work". We start at the theoretical syntax, then work forward to a working system: get it to parse; get it to run; and then get it to do the work that we want it to do. This highlights the flexibility of Ruby, as well as serving as a simple explanation of the actor model.
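
The screencast builds up its own syntax, but the underlying model can be sketched with core classes alone (an invented minimal actor, not the screencast's implementation): each actor is a thread draining its own mailbox, and sending a message is just pushing onto that queue.

```ruby
class Actor
  def initialize(&handler)
    @mailbox = Queue.new
    # Each actor owns one thread that processes messages in order.
    @thread = Thread.new do
      while (message = @mailbox.pop) != :stop
        handler.call(message)
      end
    end
  end

  # "Sending" is asynchronous: push and return immediately.
  def send_message(message)
    @mailbox << message
  end

  def stop
    @mailbox << :stop
    @thread.join
  end
end

results = Queue.new
doubler = Actor.new { |n| results << n * 2 }
doubler.send_message(21)
doubler.stop
results.pop  # => 42
```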

Lesson 15

In most Destroy All Software screencasts, the tests are run synchronously: the test process is forked from the editor and blocks the terminal. This screencast shows another option: running tests asynchronously in another split. First, we use tmux, which is a common approach. Then, we do it using Unix primitives: a named pipe to communicate between the two windows, and a shell script to manage the communication.

Lesson 16

This screencast provides a laundry list of test recommendations. Some of them have been mentioned in passing in other screencasts, but the whole group has never been presented together. First, recommendations about general test design: 1) separate unit and integration tests, and 2) use alternate constructors to simplify test object creation. Then, recommendations about using test doubles like stubs and mocks: 3) name your stubs for debugging and clarity; 4) don't stub primitives; 5) listen to test setup complexity, even if it's not immediately obvious; and 6) only stub things that you trust.
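
Recommendation 2, for example, might look like this (a sketch with an invented class, not code from the screencast): the alternate constructor gives tests one-line object creation with sensible defaults, while the primary constructor stays strict.

```ruby
class Screencast
  attr_reader :title, :duration

  # Primary constructor: every field is required.
  def initialize(title:, duration:)
    @title = title
    @duration = duration
  end

  # Alternate constructor for tests: override only what the test cares about.
  def self.example(title: "A Screencast", duration: 600)
    new(title: title, duration: duration)
  end
end

Screencast.example(duration: 949).duration  # => 949
```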

Lesson 17

Several Destroy All Software screencasts have shown ways to "fight" web frameworks like Rails, avoiding their design primitives. Usually these focus on using service objects instead of controllers and models. This screencast demonstrates a refactoring where Rails' primitives do provide the best design after all. We start with two controllers, refactoring them to move behavior out. At the end, the controllers do nothing but delegation and translation to and from HTTP.

Note: There's a mistake in the "Account.create_with_schema" method: it should take the account parameters from the controller. This doesn't affect any of the content of the screencast, fortunately. 


Watch Online Classic Season 5

# Title Duration
1 Collapsing Services Into Values 11:11
2 Splitting Active Record Models 10:37
3 Removing a Rubinius Feature 11:41
4 Python vs. Ruby Objects 08:56
5 Where Correctness Is Enforced 09:33
6 Separating Arrangement and Work 10:52
7 Primitive Obsession 15:49
8 Isolating by Separating Value 11:06
9 Imperative to OO to Functional 13:22
10 Debugging With Tests 09:47
11 Test Cases vs. Examples 14:00
12 A Bit of C 11:03
13 Analyzing Context Switches 11:47
14 Actor Syntax From Scratch 14:35
15 Running Tests Asynchronously 08:18
16 Test Recommendations 16:22
17 When Rails Is Right 10:47
18 A Day in The Life 13:08
