<h1>Introducing Spectie, a behavior-driven-development library for RSpec</h1>
<p>I'm a firm believer in the importance of <a href="http://kinderman.net/articles/2007/11/18/testing-on-high-bottom-up-versus-top-down-test-driven-development">top-down</a> and behavior-driven development. I often start writing an integration test as the first step to implementing a story. When I started doing Rails development, the expressiveness of Ruby encouraged me to start building a DSL to easily express the way I most often wrote integration tests. In the pre-<a href="http://rspec.info/">RSpec</a> days, this was just a subclass of ActionController::IntegrationTest that encapsulated the session management code to simplify authoring tests from the perspective of a single user. As the <a href="http://dannorth.net/introducing-bdd">behavior-driven development</a> idea started taking hold, I adapted the DSL to match those concepts more closely, and finally integrated it with RSpec. The result of this effort was Spectie (rhymes with necktie).</p>
<p>The primary goal of Spectie is to provide a simple, straight-forward way for developers to write BDD-style integration tests for their projects in a way that is most natural to them, using existing practices and idioms of the Ruby language.</p>
<p>Here is a simple example of the Spectie syntax in a Rails integration test:</p>
<pre><code>Feature "Compelling Feature" do
  Scenario "As a user, I would like to use a compelling feature" do
    Given :i_have_an_account, :email => "ryan@kinderman.net"
    And   :i_have_logged_in
    When  :i_access_a_compelling_feature
    Then  :i_am_presented_with_stunning_results
  end

  def i_have_an_account(options)
    @user = create_user(options[:email])
  end

  def i_have_logged_in
    log_in_as @user
  end

  def i_access_a_compelling_feature
    get compelling_feature_path
    response.should be_success
  end

  def i_am_presented_with_stunning_results
    response.should have_text("Simply stunning!")
  end
end
</code></pre>
<h1>Install</h1>
<p>Spectie is available on <a href="http://github.com/ryankinderman/spectie">GitHub</a>, <a href="http://gemcutter.org/gems/spectie">Gemcutter</a>, and <a href="http://rubyforge.org/projects/kinderman/">RubyForge</a>. The following should get it installed quickly for most people:</p>
<pre><code>% sudo gem install spectie
</code></pre>
<p>For more information on using Spectie, visit <a href="http://github.com/ryankinderman/spectie">http://github.com/ryankinderman/spectie</a>.</p>
<h1>Why not Cucumber or Coulda?</h1>
<p>At the time that this is being written, Cucumber is the new hotness in BDD integration testing. My reasons for sticking with Spectie instead of switching to <a href="http://github.com/aslakhellesoy/cucumber">Cucumber</a> like the rest of the world are as follows:</p>
<ul>
<li>Using regular expressions in place of normal Ruby method names seems like a maintenance nightmare, above and beyond the usual potential for one.</li>
<li>The layer of indirection that is created in order to write tests in plain text doesn't seem worth the cost of maintenance in most cases.</li>
<li>Separating a feature from its "step definitions" seems mostly unnecessary. I like keeping my scenarios and steps in one file until the feature becomes sufficiently big that it warrants extra organizational consideration.</li>
</ul>
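<p>The first point can be made concrete. With regex-bound steps, finding the definition that handles a given step means matching every registered pattern against the step text; with plain method names, it's an ordinary method lookup, so navigation, renames, and usage searches work with standard Ruby tooling. A minimal sketch of the difference (names and step text are hypothetical, not Spectie's or Cucumber's actual internals):</p>
<pre><code># Regex-dispatched steps, Cucumber-style: each step is bound to a
# pattern, and dispatch tries patterns until one matches.
STEP_DEFINITIONS = {
  /^I have an account with the email "([^"]*)"$/ =>
    lambda { |email| "created #{email}" },
  /^I have logged in$/ =>
    lambda { "logged in" }
}

def run_step(text)
  STEP_DEFINITIONS.each do |pattern, body|
    if (match = pattern.match(text))
      return body.call(*match.captures)
    end
  end
  raise "undefined step: #{text.inspect}"
end

# Method-dispatched steps, Spectie-style: a step is just a method,
# findable and renamable like any other Ruby method.
def i_have_an_account(options)
  "created #{options[:email]}"
end
</code></pre>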
<p>These reasons are more or less the same as those given by Evan Light, who recently published <a href="http://github.com/elight/coulda">Coulda</a>, which is his solution for avoiding the cuke. What sets Spectie apart from Coulda is its reliance on and integration with RSpec. The Spectie 'Feature' statement has the same behavior as an RSpec 'describe' statement, and the 'Scenario' statement is the same as the RSpec 'example' and 'it' statements. By building on RSpec, Spectie can take advantage of the contextual nesting provided by RSpec, and rely on RSpec to provide the BDD-style syntax within what I've been calling a scenario statement (the words after the Given/When/Thens). Coulda is built directly on Test::Unit. I'm a firm believer in code reuse, and RSpec is the de facto standard for writing BDD-style tests. Spectie, then, is a feature-driven skin on top of RSpec for writing BDD-style integration tests. To me, it only makes sense to do things that way; as RSpec evolves, so will Spectie.</p>
<h1>Rails Plugin for Mimicking SSL requests and responses</h1>
<h1>The Short</h1>
<p>I've written a plugin for Ruby on Rails that allows you to test SSL-dependent application behavior that is driven by the ssl_requirement plugin without the need to install and configure a web server with SSL.</p>
<p><a href="http://github.com/ryankinderman/mimic_ssl">Learn more</a></p>
<h1>The Long</h1>
<p>A while back, I wanted the <a href="http://selenium.openqa.org/">Selenium</a> tests for a <a href="http://www.rubyonrails.org/">Ruby on Rails</a> app I was working on to cover the SSL requirements and allowances of certain controller actions in the system, as defined using functionality provided by the <a href="http://github.com/rails/ssl_requirement">ssl_requirement</a> plugin. I also wanted this SSL-dependent behavior to occur when I was running the application on my local development machines. I had two options:</p>
<ol>
<li><p>Get a web server configured with SSL running on my development machines, as well as on the build server.</p></li>
<li><p>Patch the logic used by the system to determine if a request is under SSL or not, as well as the logic for constructing a URL under SSL, so that the system can essentially mimic an SSL request without a server configured for SSL.</p></li>
</ol>
<p>Since I had multiple Selenium builds on the <a href="http://cruisecontrolrb.thoughtworks.com/">build server</a>, <a href="http://www.subelsky.com/2007/11/testing-rails-ssl-requirements-on-your.html">setting up an SSL server</a> involved adding a host name to the loopback for each build, so that Apache could switch between virtual hosts for the different server ports. I also occasionally ran web servers on my development machines on ports other than the default 3000, as did everyone else on the team, which meant we would all have had to go through the setup process for multiple servers on those machines as well. We would need to do all of this work in order to test application logic that, strictly speaking, didn't even require the use of an actual SSL server. Given that the only thing I was interested in testing was that requests to certain actions either redirected or didn't, depending on their SSL requirements, all I really needed was to make the application mimic an SSL request.</p>
<p>Mimicking an SSL request in conjunction with the ssl_requirement plugin, without an SSL server, consisted of patching four things:</p>
<ol>
<li><p><code>ActionController::UrlRewriter#rewrite_url</code> - Provides logic for constructing a URL from options and route parameters.</p>
<p> If provided, the <code>:protocol</code> option normally serves as the part before the <code>://</code> in the constructed URL.</p>
<p> The method was patched so that the constructed URL always starts with "http://". If <code>:protocol</code> is equal to "https", this causes an "ssl" key to be added to the query string of the constructed URL, with a value of "1".</p></li>
<li><p><code>ActionController::AbstractRequest#protocol</code> - Provides the protocol used for the request.</p>
<p> The normal value is one of "http" or "https", depending on whether the request was made under SSL or not.</p>
<p> The method was patched so that it always returns "http".</p></li>
<li><p><code>ActionController::AbstractRequest#ssl?</code> - Indicates whether or not the request was made under SSL.</p>
<p> The normal value is determined by checking whether the request header <code>HTTPS</code> is equal to "on" or <code>HTTP_X_FORWARDED_PROTO</code> is equal to "https".</p>
<p> The method was patched so that it checks for a query parameter of "ssl" equal to "1".</p></li>
<li><p><code>SslRequirement#ensure_proper_protocol</code> - Used as the <code>before_filter</code> on a controller that includes the ssl_requirement plugin module, which causes the redirection to an SSL or non-SSL URL to occur, depending on the requirements defined by the controller.</p>
<p> This method was patched so that, instead of replacing the protocol used on the URL with "http" or "https", it either adds or removes the "ssl" query parameter.</p></li>
</ol>
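<p>Stripped of the Rails plumbing, the substitution boils down to deriving "is this request SSL?" from a query parameter instead of server-set headers, and building "https" URLs as plain-http URLs with that parameter attached. A framework-free sketch of the idea (function names here are illustrative; the actual plugin patches the Rails methods listed above):</p>
<pre><code>require "uri"
require "cgi"

# Normal Rails behavior: SSL-ness comes from headers set by the server.
def ssl_from_headers?(headers)
  headers["HTTPS"] == "on" ||
    headers["HTTP_X_FORWARDED_PROTO"] == "https"
end

# Mimicked behavior: SSL-ness comes from an "ssl=1" query parameter,
# so no SSL-capable server is needed.
def mimic_ssl?(url)
  query = URI.parse(url).query
  return false if query.nil?
  CGI.parse(query).fetch("ssl", []).include?("1")
end

# URL construction under the mimic: the scheme stays "http", and
# requesting "https" just adds the marker parameter.
def mimic_url(host, path, protocol)
  url = "http://#{host}#{path}"
  protocol == "https" ? "#{url}?ssl=1" : url
end
</code></pre>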
<p>For more information, installation instructions, and so on, please refer to the plugin directly at:</p>
<p><a href="http://github.com/ryankinderman/mimic_ssl">http://github.com/ryankinderman/mimic_ssl</a></p>
<h1>Enabling/disabling observers for testing</h1>
<p>If you use ActiveRecord observers in your application and are concerned about the isolation of your model unit tests, you probably want some way to disable/enable observers. Unfortunately, Rails doesn't provide an easy way to do this. So, here's some code I threw together a while ago to do just that.</p>
<pre><code>module ObserverTestHelperMethods
  def observer_instances
    ActiveRecord::Base.observers.collect do |observer|
      observer_klass = \
        if observer.respond_to?(:to_sym)
          observer.to_s.camelize.constantize
        elsif observer.respond_to?(:instance)
          observer
        end
      observer_klass.instance
    end
  end

  def observed_classes(observer=nil)
    observed = Set.new
    (observer.nil? ? observer_instances : [observer]).each do |observer|
      observed += (observer.send(:observed_classes) + observer.send(:observed_subclasses))
    end
    observed
  end

  def observed_classes_and_their_observers
    observers_by_observed_class = {}
    observer_instances.each do |observer|
      observed_classes(observer).each do |observed_class|
        observers_by_observed_class[observed_class] ||= Set.new
        observers_by_observed_class[observed_class] << observer
      end
    end
    observers_by_observed_class
  end

  def disable_observers(options={})
    except = options[:except]
    observed_classes_and_their_observers.each do |observed_class, observers|
      observers.each do |observer|
        unless observer.class == except
          observed_class.delete_observer(observer)
        end
      end
    end
  end

  def enable_observers(options={})
    except = options[:except]
    observer_instances.each do |observer|
      unless observer.class == except
        observed_classes(observer).each do |observed_class|
          observer.send :add_observer!, observed_class
        end
      end
    end
  end
end
</code></pre>
<p>Include this in a Test::Unit::TestCase, or 'include' it in your RSpec configuration, whatever floats your boat. Here's a stupid example:</p>
<pre><code>class SomethingCoolTest < Test::Unit::TestCase
  include ObserverTestHelperMethods

  def setup
    disable_observers
  end

  def teardown
    enable_observers
  end

  def test_without_observers
    # ...
  end
end
</code></pre>
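<p>If you're using RSpec instead, the equivalent wiring (sketched against the RSpec 1.x configuration API of the era; adjust for your version) would look something like this:</p>
<pre><code># spec/spec_helper.rb -- assumes ObserverTestHelperMethods is loaded
Spec::Runner.configure do |config|
  config.include ObserverTestHelperMethods

  config.before(:each) { disable_observers }
  config.after(:each)  { enable_observers }
end
</code></pre>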
<p>When you go to test the behavior of the observer itself, simply disable/enable like the following to disable/enable all observers except the one you're testing:</p>
<pre><code>class DispassionateObserverTest < Test::Unit::TestCase
  include ObserverTestHelperMethods

  def setup
    disable_observers :except => DispassionateObserver
  end

  def teardown
    enable_observers :except => DispassionateObserver
  end

  def test_without_observers_except_dispassionate_observer
    # ...
  end
end
</code></pre>
<h1>Testing on High: Bottom-up versus Top-down Test-driven Development</h1>
<p>I recently talked to a number of Rails developers about their general approach to testing some new functionality they're about to code. I asked these developers whether they found it more useful to start testing from the bottom up or the top down. I suggested that, since Rails uses the <a href="http://en.wikipedia.org/wiki/Model-view-controller">MVC pattern</a>, it's easy to think of the view, or user interface, as the "top", and the model as the "bottom". Surprisingly, nearly every developer I asked answered that they prefer to start from the bottom, or model, and test upwards. <em>Nearly every one!</em> I expected a much more mixed response than I got. In fact, I think that the correct place to start testing is <em>precisely</em> at the highest level possible, to reduce the risk of building software based on incorrect assumptions about how best to solve a user requirement.</p>
<h1>Bottom-up Testing</h1>
<p>Bottom-up testing implies bottom-up design in TDD. In bottom-up design, a developer would probably consider the high-level objectives and break them up into manageable components that interact with each other to provide the desired functionality. The developer thinks about how each component will be used by its client components, and tests accordingly.</p>
<p>The problem with the bottom-up approach is that it's difficult to really know how a component needs to be used by its clients until the clients are implemented. To consider how the clients will be implemented, the developer must also think about how those clients will be used by <em>their</em> clients. This thought process continues until we reach the summit of our mighty design! Hopefully, when the developer is done pondering, they can write a suite of tests for a component which directly solves the needs of its client components. In my experience, however, this is rarely the case. What really happens is that the lower-level components tend to do too much, too little, or the right amount in a way that is awkward or complicated to make use of.</p>
<p>The advantage of bottom-up testing is that, since we're starting with the most basic, fundamental components, we guarantee that we'll have some working software fairly quickly. However, since the software being written may not be closely associated with the high-level user requirements, it may not produce results that are necessarily valuable to the user. A simple client could quickly be written which demonstrates how the components work to the user, but that's beside the point unless the application being developed is a simple application. In such a case, the bottom level of components is probably close enough to the top-level ones that there is little risk involved in choosing either the bottom-up or top-down approach.</p>
<p>Unless you're writing a small application, the code is probably going to have to support unforeseen use cases. When those use cases collide with ungrounded assumptions baked into the software that's already been written, the result can be a lot of rework. I can tell you from experience: once you realize that your lower-level components don't fit the bill for the higher levels in the system, it can be quite a chore to go back and fix, remove, or replace all of that unnecessary or incorrect code.</p>
<h1>Top-down Testing</h1>
<p>Top-down testing implies top-down design in TDD. Following the top-down approach, the developer will pick the highest level of the system to be tested; that is to say, the part of the system that has the closest correlation to the user requirements. This approach is sometimes referred to as <a href="http://en.wikipedia.org/wiki/Behavior_driven_development">Behavior Driven Development</a>. Whatever it's called, the point is that you test the most critical parts of the application first.</p>
<p>Since software is often written for human users, the most critical parts usually involve the front-end as it relates to the value being provided by the system being developed. When testing from the top-down, the effort is the inverse of bottom-up testing: Instead of spending a lot of time thinking about how the components to be developed will be used by other components to be developed, the focus is on how the user needs to interact with the system. Testing involves proving that the system supports the required usability. For an application with a graphical front-end, this might involve testing for a minimal version of that front-end.</p>
<p>The disadvantage of top-down testing is that you can end up with a lot of <a href="http://blog.caboo.se/articles/2006/01/12/mocking-net-http-get">stubbed</a> or <a href="http://en.wikipedia.org/wiki/Mock_object">mocked</a> code that you then have to go back and implement. This means it might take longer before you have software that actually does something besides pass tests. However, there are ways that you can minimize this sort of recursive development problem.</p>
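<p>To illustrate (class names are hypothetical), a top-down test can pin down the behavior of a high-level component by handing it a hand-rolled stub of a collaborator that hasn't been written yet; the real collaborator then gets implemented afterwards, against a known client:</p>
<pre><code># High-level component under test. Its collaborator is injected, so a
# stub can stand in for the not-yet-implemented repository.
class OverdueReport
  def initialize(repository)
    @repository = repository
  end

  def summary
    "#{@repository.overdue_invoices.size} overdue invoice(s)"
  end
end

# A hand-rolled stub: just enough behavior to drive the test.
class StubRepository
  def overdue_invoices
    [:invoice_a, :invoice_b]
  end
end

report = OverdueReport.new(StubRepository.new)
report.summary   # => "2 overdue invoice(s)"
</code></pre>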
<p>One way to minimize the time between starting development of a feature and demonstrating functionality that is valuable to the user is to focus on a thin slice of the overall architectural pie of the application. For example, there may be a number of views that need to be implemented before the system provides some major piece of functionality. However, the developer can focus on one view at a time, or one part of the view. That way, the number of components that need to be implemented before the system does something useful is small; ideally, one component in each architectural layer that needs to be built out, and oftentimes only a part of the overall functionality of each component.</p>
<p>Another way to minimize the amount of time before the system does something useful is to code a small bit of functionality without worrying about breaking the problem up into classes until you have some tested, working code to analyze. You can then use established methods for <a href="http://www.refactoring.com/">refactoring</a> to bring the code to an acceptable level of quality.</p>
<p>The advantage of top-down testing is that you implement the most critical functionality first. This generally means starting development at a high level. When the system eventually does something besides pass tests, what it does will provide value to its users. Additionally, because development starts at a high level, the code that is written is based on the current understanding of the problem, and not on assumptions. This guarantees that the tests and code that are written are not superfluous.</p>
<h1>Conclusion</h1>
<p>The challenge with top-down testing is that you must be highly disciplined to ensure that the code you write is being refactored and is properly evolving into a cohesive domain model for the application. This is compared with bottom-up testing, where you start with the domain model and build your system around it. Either way, you're going to be refactoring code. The difference is in where the time in refactoring is spent. In my experience, when doing bottom-up testing, more time is spent correcting incorrect assumptions about how the domain model will be used than on actually improving code that already works to solve the user requirements. In order to avoid making assumptions about the code being written, it must be written at the level that is closest to providing actual value to the end-user. In so doing, the developer focuses on continuous refinement of code that already provides value, as opposed to speculative design and development.</p>
<h1>Selenium Core Bug and TinyMCE Anchor Tags</h1>
<p>Today, I was trying to get Selenium to click an anchor tag that was created with the "link" plugin in a TinyMCE editor. I was able to verify that the link was present with something as simple as <code>verifyElementPresent('link=Link Text')</code>. However, when I tried calling <code>clickAndWait('link=Link Text')</code>, it gave a "Window does not exist" error. A quick Google search yielded the answer: <a href="http://jira.openqa.org/browse/SEL-417;jsessionid=amPwB2LotH9-TC9itt?page=com.atlassian.jira.plugin.system.issuetabpanels:changehistory-tabpanel;jsessionid=amPwB2LotH9-TC9itt">a bug in Selenium Core</a>.</p>
<p>When the TinyMCE "link" plugin creates a link that doesn't open in a new window, it sets the "target" attribute on the anchor tag to "_self". Selenium Core versions prior to 0.8.4 (which hasn't been released yet) don't respond to links with "target" set to "_self".</p>
<p>If you're doing Rails development and using the <a href="http://www.openqa.org/selenium-on-rails/">selenium_on_rails</a> plugin, it uses an old version of Selenium Core (0.7.something) as of this posting. To fix the anchor tag problem, I replaced the contents of the <code>selenium-core</code> directory under <code>vendor/plugins/selenium_on_rails</code> with that of the <code>core</code> directory of the <a href="http://www.openqa.org/selenium-core/download.action">Selenium Core 0.8.3 release</a>, then applied the patch described in the bug report linked above. This seems to have fixed the problem.</p>
<p>Hopefully this saves you all some time and muddling.</p>
<h1>Failing Quickly When Testing For Performance</h1>
<p>I was working with an algorithm today that I discovered had a bug that caused it to run for an unacceptable amount of time, hogging a lot of system resources in the process. Whenever I find a bug in a piece of code I'm working on, I write a failing unit test for it that defines the correct behavior. For this algorithm, I needed to define what an "acceptable amount of time" was in the test, and then test for that level of performance so that the test results were consistent across multiple computers with possibly differing resource loads and load fluctuations. I also needed to ensure that the test would fail as quickly as possible in the event that the algorithm did not perform as desired.
</p>
<p>The method containing the algorithm takes a string parameter such as "1-4, 23, 50-52", specified as user input and representing a range of numbers. It then generates an array of numbers; for the string previously mentioned, the array would contain the numbers 1, 2, 3, 4, 23, 50, 51, and 52. The method also takes an optional parameter for the maximum amount of numbers that would be acceptable for it to generate, since generating an array containing all numbers for a range string like "1-9999999999999" would send the generating system into <a href="http://alexrock.servebeer.com/burning-computer.jpg">epileptic fits, complete with bus lines frothing</a>. As you may have guessed, this was where the problem was: The method in question generated all of the numbers in the specified range string, <i>and then</i> it checked to see if the amount of numbers generated exceeded the specified maximum.</p>
<p>I needed to define an acceptable response time for a given maximum size of the generated array of numbers for my test. It seems to me that it should take the same amount of time for the algorithm to complete with a range for 10 numbers with a maximum resulting array size of 5 as it does with a range for 10 million, billion, or squigillion numbers with the same result size. Basically, when the algorithm determines that the given range will exceed the maximum, it should end. The challenge here is that different computers will have different timings to reach the maximum, so a reasonably accurate, system-specific timing expectation needed to be calculated.</p>
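<p>The fix implied by this requirement can be sketched independently of the timing test. The sketch below is an illustrative reimplementation, not the original code: it enforces the limit <i>while</i> expanding the range string, so a range like "1-9999999999999" fails after a handful of iterations instead of materializing trillions of numbers first:</p>
<pre><code># Expands a range string like "1-4, 23, 50-52" into an array of numbers,
# raising as soon as the result would exceed the limit -- rather than
# generating the whole range and checking the size afterwards.
def expand_ranges(range_string, result_size_limit)
  numbers = []
  range_string.split(",").each do |part|
    first, last = part.strip.split("-").map { |s| Integer(s) }
    last ||= first                      # a bare "23" is the range 23..23
    (first..last).each do |n|
      numbers << n
      if numbers.size > result_size_limit
        raise "range exceeds limit of #{result_size_limit} numbers"
      end
    end
  end
  numbers
end

expand_ranges("1-4, 23, 50-52", 10)  # => [1, 2, 3, 4, 23, 50, 51, 52]
</code></pre>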
<p>For this purpose, I wrote a method that determines the range of acceptable response times for the algorithm, given a desired number count, maximum result size, and the number of sample timings to make, since timings will differ slightly from one invocation to another.</p>
<pre><code>def acceptable_timing(number_count, result_size_limit, sample_count=10)
  timings = []
  sample_count.times do
    generator = NumberGenerator.new("1-#{number_count}", result_size_limit)
    start_time = DateTime.now
    generator.numbers
    end_time = DateTime.now
    timings << end_time - start_time
  end
  0.0..average(timings) + standard_deviation(timings)
end
</code></pre>
<p>The next challenge was testing the <code>numbers</code> method with a range string that represents a large set of numbers, but using the same <code>result_size_limit</code> that was used in the call to <code>acceptable_timing</code>. I decided that a range of 9999999 numbers was sufficiently large to determine that the timing was acceptable; after all, it <i>should</i> take the same amount of time with the same result size limit as if I were to use 100 numbers, right? However, the problem with using a set of 9999999 numbers is that, with the bug, the test will hang for an extremely long time and hog a lot of system resources. We want our tests to fail as fast as possible, and give a useful error message if and when that failure occurs.</p>
<p>To ensure that the test fails fast, I decided to launch a separate thread to call the method under test, so that I can stop it as soon as it's determined that it's taking longer than the acceptable amount of time to return.</p>
<pre><code>def completes_within?(threshold, &block)
  start_time = DateTime.now
  thread = Thread.new &block
  while true
    if !threshold.include?(DateTime.now - start_time)
      thread.kill
      return false
    end
    return true if thread.stop?
  end
ensure
  thread.join
end
</code></pre>
<p>And finally, the test:</p>
<pre><code>def test_numbers_fails_fast_when_result_size_limit_exceeded
  range_size = 9999999
  result_size_limit = 5
  generator = NumberGenerator.new("1-#{range_size}", result_size_limit)
  acceptable_amount_of_time = acceptable_timing(100, result_size_limit)
  assert_equal true, \
    completes_within?(acceptable_amount_of_time) { generator.numbers }, \
    "Exceeded acceptable time to determine that range of #{range_size} " + \
    "numbers exceeds limit of #{result_size_limit}"
end
</code></pre>
<p>I considered using a range size smaller than 9999999 to avoid the threading and make the solution simpler. My reasoning for not doing that is, if I were to pick a smaller number, it would still have to be sufficiently larger than the range size I used to determine the acceptable amount of time for the method under test to return. The larger range size gives me confidence that a failed timing is not just because of a resource spike on the computer running the test, at least if the test is <i>supposed</i> to fail. If I have to pick a large number anyway, it's going to take the test longer to fail, thus violating the idea of fail-fast testing. Therefore, I might as well just abort the method as soon as I know it's going to take too long.</p>
<p>To further improve the reliability of this test, the <code>completes_within?</code> method could be called multiple times and, if a success is ever achieved, the test passes. However, this would make the test run longer, so the choice of whether to use it or not should depend on the variation in resource load that is expected amongst the computers that will be running the tests. If the tests are running on a dedicated machine, this technique probably wouldn't be needed.</p>
<p>In order to gain 100% confidence that there will be no false negatives in the test results, the structure of the code could be modified so that it can be determined whether the algorithm is considering the result limit <i>while</i> it generates the numbers, or <i>afterwards</i>, as in the case of the buggy version of the algorithm. The tradeoff here is that a certain amount of the algorithm logic must be externalized so that the necessary assertions can be set up in the test. This makes the algorithm itself less adaptable to change, as some changes could make the test fail inappropriately, since not only would the results be getting tested, but also the way in which the algorithm works.</p>
<h1>Defined Classifications for "Mock Objects"</h1>
<p>Martin Fowler has a good article on his blog about the different kinds of "mock objects" used in unit testing. He uses Gerard Meszaros' word for this classification of object: <a href="http://martinfowler.com/bliki/TestDouble.html">Test Double</a>. If you've ever been in a discussion about unit tests, you know how easily misunderstandings can result from throwing around terms like "mock", "dummy", or "stub". It's a good idea to have a consistent definition for these things.</p>