Update shunit to current master

@ -0,0 +1,147 @@
Coding Standards
================
shFlags is more than just a simple 20 line shell script. It is a pretty
significant library of shell code that at first glance is not that easy to
understand. To improve code readability and usability, some guidelines have been
set down to make the code more understandable for anyone who wants to read or
modify it.
Function declaration
--------------------
Declare functions using the following form:
```sh
doSomething() {
  echo 'done!'
}
```
One-line functions are allowed if they can fit within the 80 char line limit.
```sh
doSomething() { echo 'done!'; }
```
Function documentation
----------------------
Each function should be preceded by a header that provides the following:
1. A one-sentence summary of what the function does.
1. (optional) A longer description of what the function does, and perhaps some
special information that helps convey its usage better.
1. Args: a one-line summary of each argument of the form:
`name: type: description`
1. Output: a one-line summary of the output provided. Only output to STDOUT
must be documented, unless the output to STDERR is of significance (i.e. not
just an error message). The output should be of the form:
`type: description`
1. Returns: a one-line summary of the value returned. Returns in shell are
always integers, but if the output is a true/false for success (i.e. a
boolean), it should be noted. The output should be of the form:
`type: description`
Here is a sample header:
```
# Return valid getopt options using currently defined list of long options.
#
# This function builds a proper getopt option string for short (and long)
# options, using the current list of long options for reference.
#
# Args:
# _flags_optStr: integer: option string type (__FLAGS_OPTSTR_*)
# Output:
# string: generated option string for getopt
# Returns:
# boolean: success of operation (always returns True)
```
Variable and function names
---------------------------
All shFlags specific constants, variables, and functions will be prefixed
appropriately with 'flags'. This is to distinguish usage in the shFlags code
from users' own scripts so that the shell name space remains predictable to
users. The exceptions here are the standard `assertEquals`, etc. functions.
All non-built-in constants and variables will be surrounded with squiggle
brackets, e.g. `${flags_someVariable}` to improve code readability.
Due to some shells not supporting local variables in functions, care in the
naming and use of variables, both public and private, is very important.
Accidental overriding of the variables can occur easily if care is not taken as
all variables are technically global variables in some shells.
Type | Sample
---- | ------
global public constant | `FLAGS_TRUE`
global private constant | `__FLAGS_SHELL_FLAGS`
global public variable | `flags_variable`
global private variable | `__flags_variable`
global macro | `_FLAGS_SOME_MACRO_`
public function | `flags_function`
public function, local variable | `flags_variable_`
private function | `_flags_function`
private function, local variable | `_flags_variable_`
Where it makes sense to improve readability, variables can have the first
letter of the second and later words capitalized. For example, the local
variable name for the help string length is `flags_helpStrLen_`.
There are three special-case global public variables. They are used to
overcome the limitations of shell scoping or to prevent forking. The three
variables are:
- `flags_error`
- `flags_output`
- `flags_return`
Local variable cleanup
----------------------
As many shells do not support local variables, no support for cleanup of
variables is present either. As such, all variables local to a function must be
cleared up with the `unset` built-in command at the end of each function.
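A minimal sketch, using invented names that follow the naming table above, of a
private function cleaning up its "local" variables before returning:
```sh
_flags_strLen() {
  # Variables local to this private function carry a leading and trailing
  # underscore, per the naming conventions above.
  _flags_str_=$1
  _flags_len_=`expr "${_flags_str_}" : '.*'`
  echo "${_flags_len_}"

  # Clean up the function-local variables before returning.
  unset _flags_str_ _flags_len_
  return ${FLAGS_TRUE}
}
```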
Indentation
-----------
Code block indentation is two (2) spaces, and tabs may not be used.
```sh
if [ -z 'some string' ]; then
  someFunction
fi
```
Lines of code should be no longer than 80 characters unless absolutely
necessary. When lines are wrapped using the backslash character '\', subsequent
lines should be indented with four (4) spaces so as to differentiate from the
standard spacing of two characters, and tabs may not be used.
```sh
for x in some set of very long set of arguments that make for a very long \
    that extends much too long for one line
do
  echo ${x}
done
```
When a conditional expression is written using the built-in [ command, and that
line must be wrapped, place the control || or && operators on the same line as
the expression where possible, with the list to be executed on its own line.
```sh
[ -n 'some really long expression' -a -n 'some other long expr' ] && \
    echo 'that was actually true!'
```

@ -1,10 +1,15 @@
# shUnit2
shUnit2 is a [xUnit](http://en.wikipedia.org/wiki/XUnit) unit test framework for
Bourne based shell scripts, and it is designed to work in a similar manner to
[JUnit](http://www.junit.org), [PyUnit](http://pyunit.sourceforge.net), etc.. If
you have ever had the desire to write a unit test for a shell script, shUnit2
can do the job.
[![Travis CI](https://api.travis-ci.com/kward/shunit2.svg)](https://app.travis-ci.com/github/kward/shunit2)
## Table of Contents
* [Introduction](#introduction)
* [Credits / Contributors](#credits-contributors)
* [Feedback](#feedback)
@ -21,47 +26,76 @@ shUnit2 is a [xUnit](http://en.wikipedia.org/wiki/XUnit) unit test framework for
* [Error Handling](#error-handling)
* [Including Line Numbers in Asserts (Macros)](#including-line-numbers-in-asserts-macros)
* [Test Skipping](#test-skipping)
* [Running specific tests from the command line](#cmd-line-args)
* [Appendix](#appendix)
* [Getting help](#getting-help)
* [Zsh](#zsh)
---
## <a name="introduction"></a> Introduction
shUnit2 was originally developed to provide a consistent testing solution for
[log4sh][log4sh], a shell based logging framework similar to
[log4j](http://logging.apache.org). During the development of that product, a
repeated problem of having things work just fine under one shell (`/bin/bash` on
Linux to be specific), and then not working under another shell (`/bin/sh` on
Solaris) kept coming up. Although several simple tests were run, they were not
adequate and did not catch some corner cases. The decision was finally made to
write a proper unit test framework after multiple brown-bag releases were made.
_Research was done to look for an existing product that met the testing
requirements, but no adequate product was found._
### Tested software
**Tested Operating Systems** (varies over time)
OS | Support | Verified
----------------------------------- | --------- | --------
Ubuntu Linux (14.04.05 LTS) | Travis CI | continuous
macOS High Sierra (10.13.3) | Travis CI | continuous
FreeBSD | user | unknown
Solaris 8, 9, 10 (inc. OpenSolaris) | user | unknown
Cygwin | user | unknown
**Tested Shells**
* Bourne Shell (__sh__)
* BASH - GNU Bourne Again SHell (__bash__)
* DASH - Debian Almquist Shell (__dash__)
* Korn Shell - AT&T version of the Korn shell (__ksh__)
* mksh - MirBSD Korn Shell (__mksh__)
* zsh - Zsh (__zsh__) (since 2.1.2) _please see the Zsh shell errata for more information_
See the appropriate Release Notes for this release
(`doc/RELEASE_NOTES-X.X.X.txt`) for the list of actual versions tested.
### <a name="credits-contributors"></a> Credits / Contributors
A list of contributors to shUnit2 can be found in `doc/contributors.md`. Many
thanks go out to all those who have contributed to make this a better tool.
shUnit2 is the original product of many hours of work by Kate Ward, the primary
author of the code. For related software, check out https://github.com/kward.
### <a name="feedback"></a> Feedback
Feedback is most certainly welcome for this document. Send your questions,
comments, and criticisms via the
[shunit2-users](https://groups.google.com/a/forestent.com/forum/#!forum/shunit2-users/new)
forum (created 2018-12-09), or file an issue via
https://github.com/kward/shunit2/issues.
---
## <a name="quickstart"></a> Quickstart
This section will give a very quick start to running unit tests with shUnit2.
More information is located in later sections.
Here is a quick sample script to show how easy it is to write a unit test in
shell. _Note: the script as it stands expects that you are running it from the
"examples" directory._
```sh
#! /bin/sh
@ -72,7 +106,7 @@ testEquality() {
}
# Load shUnit2.
. ../shunit2
```
Running the unit test should give results similar to the following.
@ -87,14 +121,38 @@ Ran 1 test.
OK
```
W00t! You've just run your first successful unit test. So, what just happened?
Quite a bit really, and it all happened simply by sourcing the `shunit2`
library. The basic functionality for the script above goes like this:
* When shUnit2 is sourced, it will walk through any functions defined whose name
starts with the string `test`, and add those to an internal list of tests to
execute. Once a list of test functions to be run has been determined, shunit2
will go to work.
* Before any tests are executed, shUnit2 again looks for a function, this time
one named `oneTimeSetUp()`. If it exists, it will be run. This function is
normally used to set up the environment for all tests to be run. Things like
creating directories for output or setting environment variables are good to
place here. Just so you know, you can also declare a corresponding
`oneTimeTearDown()` function that does the same thing, but once all the
tests have been completed. It is good for removing temporary directories, etc.
* shUnit2 is now ready to run tests. Before doing so though, it again looks for
another function that might be declared, one named `setUp()`. If the function
exists, it will be run before each test. It is good for resetting the
environment so that each test starts with a clean slate. **At this stage, the
first test is finally run.** The success of the test is recorded for a report
that will be generated later. After the test is run, shUnit2 looks for a final
function that might be declared, one named `tearDown()`. If it exists, it will
be run after each test. It is a good place for cleaning up after each test,
maybe doing things like removing files that were created, or removing
directories. This set of steps, `setUp() > test() > tearDown()`, is repeated
for all of the available tests.
* Once all the work is done, shUnit2 will generate the nice report you saw
above. A summary of all the successes and failures will be given so that you
know how well your code is doing.
We should now try adding a test that fails. Change your unit test to look like
this.
```sh
#! /bin/sh
@ -110,12 +168,30 @@ testPartyLikeItIs1999() {
}
# Load shUnit2.
. ../shunit2
```
So, what did you get? I guess it told you that this isn't 1999. Bummer, eh?
Hopefully, you noticed a couple of things that were different about the second
test. First, we added an optional message that the user will see if the assert
fails. Second, we did comparisons of strings instead of integers as in the first
test. It doesn't matter whether you are testing for equality of strings or
integers. Both work equally well with shUnit2.
Hopefully, this is enough to get you started with unit testing. If you want a
ton more examples, take a look at the tests provided with [log4sh][log4sh] or
[shFlags][shflags]. Both provide excellent examples of more advanced usage.
shUnit2 was after all written to meet the unit testing need that
[log4sh][log4sh] had.
If you are using a distribution-packaged shUnit2 that is accessible as
`/usr/bin/shunit2` (such as on Debian), you can load shUnit2 without specifying
its path. The last two lines in the example above can then be replaced by:
```sh
# Load shUnit2.
. shunit2
```
---
@ -123,139 +199,212 @@ Hopefully, this is enough to get you started with unit testing. If you want a to
### <a name="general-info"></a> General Info
Any string values passed should be properly quoted -- they should be
surrounded by single-quote (`'`) or double-quote (`"`) characters -- so that the
shell will properly parse them.
### <a name="asserts"></a> Asserts
`assertEquals [message] expected actual`
Asserts that _expected_ and _actual_ are equal to one another. The _expected_
and _actual_ values can be either strings or integer values as both will be
treated as strings. The _message_ is optional, and must be quoted.
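For example, a test function might compare a command's output against an
expected value, with and without the optional message (the command and values
here are illustrative):
```sh
testUpperCasesInput() {
  result=`echo 'hello' | tr 'a-z' 'A-Z'`
  assertEquals 'HELLO' "${result}"
  assertEquals 'should have upper-cased the input' 'HELLO' "${result}"
}
```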
`assertNotEquals [message] unexpected actual`
Asserts that _unexpected_ and _actual_ are not equal to one another. The
_unexpected_ and _actual_ values can be either strings or integer values as both
will be treated as strings. The _message_ is optional, and must be quoted.
`assertSame [message] expected actual`
This function is functionally equivalent to `assertEquals`.
`assertNotSame [message] unexpected actual`
This function is functionally equivalent to `assertNotEquals`.
`assertContains [message] container content`
Asserts that _container_ contains _content_. The _container_ and _content_
values can be either strings or integer values as both will be treated as
strings. The _message_ is optional, and must be quoted.
`assertNotContains [message] container content`
Asserts that _container_ does not contain _content_. The _container_ and
_content_ values can be either strings or integer values as both will be treated
as strings. The _message_ is optional, and must be quoted.
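A small sketch of the two contains asserts in use (the container string is made
up):
```sh
testContainsAsserts() {
  output='GNU bash, version 5.1.16'
  assertContains "${output}" 'bash'
  assertNotContains 'unexpected shell name found' "${output}" 'zsh'
}
```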
`assertNull [message] value`
Asserts that _value_ is _null_, or in shell terms, a zero-length string. The
_value_ must be a string as an integer value does not translate into a
zero-length string. The _message_ is optional, and must be quoted.
`assertNotNull [message] value`
Asserts that _value_ is _not null_, or in shell terms, a non-empty string. The
_value_ may be a string or an integer as the latter will be parsed as a non-empty
string value. The _message_ is optional, and must be quoted.
`assertTrue [message] condition`
Asserts that a given shell test _condition_ is _true_. The condition can be as
simple as a shell _true_ value (the value `0` -- equivalent to
`${SHUNIT_TRUE}`), or a more sophisticated shell conditional expression. The
_message_ is optional, and must be quoted.
A sophisticated shell conditional expression is equivalent to what the __if__ or
__while__ shell built-ins would use (more specifically, what the __test__
command would use). Testing for example whether some value is greater than
another value can be done this way.
`assertTrue "[ 34 -gt 23 ]"`
assertTrue "[ 34 -gt 23 ]"
Testing for the ability to read a file can also be done. This particular test will fail.
Testing for the ability to read a file can also be done. This particular test
will fail.
`assertTrue 'test failed' "[ -r /some/non-existant/file ]"`
As the expressions are standard shell __test__ expressions, it is possible to
string multiple expressions together with `-a` and `-o` in the standard fashion.
This test will succeed as the entire expression evaluates to _true_.
`assertTrue 'test failed' '[ 1 -eq 1 -a 2 -eq 2 ]'`
<i>One word of warning: be very careful with your quoting as shell is not the
most forgiving of bad quoting, and things will fail in strange ways.</i>
`assertFalse [message] condition`
Asserts that a given shell test _condition_ is _false_. The condition can be as
simple as a shell _false_ value (the value `1` -- equivalent to
`${SHUNIT_FALSE}`), or a more sophisticated shell conditional expression. The
_message_ is optional, and must be quoted.
_For examples of more sophisticated expressions, see `assertTrue`._
### <a name="failures"></a> Failures
Just to clarify, failures __do not__ test the various arguments against one
another. Failures simply fail, optionally with a message, and that is all they
do. If you need to test arguments against one another, use asserts.
If all failures do is fail, why might one use them? There are times when you may
have some very complicated logic that you need to test, and the simple asserts
provided are simply not adequate. You can do your own validation of the code,
use an `assertTrue ${SHUNIT_TRUE}` if your own tests succeeded, and use a
failure to record a failure.
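A sketch of that pattern, where `_validateConfig` stands in for whatever
complicated validation logic is needed (both the helper and the file are
illustrative; `SHUNIT_TMPDIR` is provided by shUnit2):
```sh
# Illustrative helper: "valid" here just means the file exists and is
# non-empty. Real validation logic would go in its place.
_validateConfig() {
  [ -s "$1" ]
}

testConfigIsValid() {
  echo 'key=value' >"${SHUNIT_TMPDIR}/myapp.conf"
  if _validateConfig "${SHUNIT_TMPDIR}/myapp.conf"; then
    assertTrue ${SHUNIT_TRUE}
  else
    fail 'custom validation of myapp.conf failed'
  fi
}
```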
`fail [message]`
Fails the test immediately. The _message_ is optional, and must be quoted.
`failNotEquals [message] unexpected actual`
Fails the test immediately, reporting that the _unexpected_ and _actual_ values
are not equal to one another. The _message_ is optional, and must be quoted.
_Note: no actual comparison of unexpected and actual is done._
`failSame [message] expected actual`
Fails the test immediately, reporting that the _expected_ and _actual_ values
are the same. The _message_ is optional, and must be quoted.
_Note: no actual comparison of expected and actual is done._
`failNotSame [message] expected actual`
Fails the test immediately, reporting that the _expected_ and _actual_ values
are not the same. The _message_ is optional, and must be quoted.
_Note: no actual comparison of expected and actual is done._
`failFound [message] content`
Fails the test immediately, reporting that the _content_ was found. The
_message_ is optional, and must be quoted.
_Note: no actual search of content is done._
`failNotFound [message] content`
Fails the test immediately, reporting that the _content_ was not found. The
_message_ is optional, and must be quoted.
_Note: no actual search of content is done._
### <a name="setup-teardown"></a> Setup/Teardown
`oneTimeSetUp`
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called once before any tests are run. It is
useful to prepare a common environment for all tests.
`oneTimeTearDown`
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called once after all tests are completed.
It is useful to clean up the environment after all tests.
`setUp`
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called before each test is run. It is useful
to reset the environment before each test.
`tearDown`
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called after each test completes. It is
useful to clean up the environment after each test.
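A minimal sketch of the hooks working together; `SHUNIT_TMPDIR` is provided by
shUnit2, everything else is illustrative:
```sh
oneTimeSetUp() {
  # Runs once, before any test: create a shared output directory.
  outputDir="${SHUNIT_TMPDIR}/output"
  mkdir -p "${outputDir}"
}

setUp() {
  # Runs before each test: give the test a fresh working file name.
  testF="${outputDir}/test.txt"
}

tearDown() {
  # Runs after each test: remove anything the test left behind.
  rm -f "${testF}"
}

oneTimeTearDown() {
  # Runs once, after all tests: remove the shared output directory.
  rm -rf "${outputDir}"
}
```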
### <a name="skipping"></a> Skipping
`startSkipping`
This function forces the remaining _assert_ and _fail_ functions to be
"skipped", i.e. they will have no effect. Each function skipped will be recorded
so that the total of asserts and fails will not be altered.
`endSkipping`
This function returns calls to the _assert_ and _fail_ functions to their
default behavior, i.e. they will be called.
`isSkipping`
This function returns the current state of skipping. It can be compared against
`${SHUNIT_TRUE}` or `${SHUNIT_FALSE}` if desired.
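A minimal sketch of skipping in action; the GNU `sed` check is only an example
of a capability test:
```sh
testGnuSedSubstitution() {
  # Skip the remaining asserts when GNU sed is not available.
  if ! sed --version >/dev/null 2>&1; then
    startSkipping
  fi
  assertEquals 'X' "`echo 'x' | sed 's/x/X/'`"
}
```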
### <a name="suites"></a> Suites
The default behavior of shUnit2 is that all tests will be found dynamically. If
you have a specific set of tests you want to run, or you don't want to use the
standard naming scheme of prefixing your tests with `test`, these functions are
for you. Most users will never use them though.
`suite`
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called when `shunit2` is sourced. If it does
not exist, shUnit2 will search the parent script for all functions beginning
with the word `test`, and they will be added dynamically to the test suite.
`suite_addTest name`
This function adds a function named _name_ to the list of tests scheduled for
execution as part of this test suite. This function should only be called from
within the `suite()` function.
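A short sketch of an explicit suite (the test names are illustrative):
```sh
suite() {
  suite_addTest testSomething
  suite_addTest some_other_check  # Would not be picked up automatically.
}

testSomething() {
  assertEquals 1 1
}

some_other_check() {
  assertNotNull "${HOME}"
}
```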
---
@ -263,7 +412,8 @@ This function adds a function named _name_ to the list of tests scheduled for ex
### <a name="some-constants-you-can-use"></a> Some constants you can use
There are several constants provided by shUnit2 as variables that might be of
use to you.
*Predefined*
@ -280,22 +430,32 @@ There are several constants provided by shUnit2 as variables that might be of us
| Constant | Value |
| ----------------- | ----- |
| SHUNIT\_CMD\_EXPR | Override which `expr` command is used. By default `expr` is used, except on BSD systems where `gexpr` is used. |
| SHUNIT\_COLOR | Enable colorized output. Options are 'auto', 'always', or 'none', with 'auto' being the default. |
| SHUNIT\_PARENT | The filename of the shell script containing the tests. This is needed specifically for Zsh support. |
| SHUNIT\_TEST\_PREFIX | Define this variable to add a prefix in front of each test name that is output in the test report. |
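For example, one way to use the overridable constants is to set them in the
test script before sourcing `shunit2` (a sketch; the values shown are
arbitrary):
```sh
# Set before sourcing shunit2 so that the overrides take effect.
SHUNIT_COLOR='none'              # Disable colorized output.
SHUNIT_TEST_PREFIX='equality: '  # Prefix each test name in the report.

. ./shunit2
```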
### <a name="error-handling"></a> Error handling
The constant values `SHUNIT_TRUE`, `SHUNIT_FALSE`, and `SHUNIT_ERROR` are
returned from nearly every function to indicate the success or failure of the
function. Additionally, the variable `flags_error` is filled with a detailed
error message if any function returns with a `SHUNIT_ERROR` value.
### <a name="including-line-numbers-in-asserts-macros"></a> Including Line Numbers in Asserts (Macros)
If you include lots of assert statements in an individual test function, it can
become difficult to determine exactly which assert was thrown unless your
messages are unique. To help somewhat, line numbers can be included in the
assert messages. To enable this, a special shell "macro" must be used rather
than the standard assert calls. _Shell doesn't actually have macros; the name is
used here as the operation is similar to a standard macro._
For example, to include line numbers for an `assertEquals()` function call,
replace the `assertEquals()` with `${_ASSERT_EQUALS_}`.
_**Example** -- Asserts with and without line numbers_
```shell
#! /bin/sh
# file: examples/lineno_test.sh
@ -309,20 +469,36 @@ testLineNo() {
}
# Load shUnit2.
. ../shunit2
```
Notes:
1. Due to how shell parses command-line arguments, _**all strings used with
macros should be quoted twice**_. Namely, single-quotes must be converted to single-double-quotes, and vice-versa.<br/>
<br/>
Normal `assertEquals` call.<br/>
`assertEquals 'some message' 'x' ''`<br/>
<br/>
Macro `_ASSERT_EQUALS_` call. Note the extra quoting around the _message_ and
the _null_ value.<br/>
`_ASSERT_EQUALS_ '"some message"' 'x' '""'`
1. Line numbers are not supported in all shells. If a shell does not support
them, no errors will be thrown. Supported shells include: __bash__ (>=3.0),
__ksh__, __mksh__, and __zsh__.
### <a name="test-skipping"></a> Test Skipping
There are times where the test code you have written is just not applicable to
the system you are running on. This section describes how to skip these tests
but maintain the total test count.
Probably the easiest example would be shell code that is meant to run under the
__bash__ shell, but the unit test is running under the Bourne shell. There are
things that just won't work. The following test code demonstrates two sample
functions, one that will be run under any shell, and another that will run
only under the __bash__ shell.
_**Example** -- math include_
```sh
@ -371,10 +547,11 @@ oneTimeSetUp() {
}
# Load and run shUnit2.
. ../shunit2
```
Running the above test under the __bash__ shell will result in the following
output.
```console
$ /bin/bash math_test.sh
@ -385,7 +562,8 @@ Ran 1 test.
OK
```
But, running the test under any other Unix shell will result in the following
output.
```console
$ /bin/ksh math_test.sh
@ -396,9 +574,33 @@ Ran 1 test.
OK (skipped=1)
```
As you can see, the total number of tests has not changed, but the report
indicates that some tests were skipped.
Skipping can be controlled with the following functions: `startSkipping()`,
`endSkipping()`, and `isSkipping()`. Once skipping is enabled, it will remain
enabled until the end of the current test function call, after which skipping is
disabled.
### <a name="cmd-line-args"></a> Running specific tests from the command line.
When running a test script, you may override the default set of tests, or the suite-specified set of tests, by providing additional arguments on the command line. Each additional argument after the `--` marker is assumed to be the name of a test function to be run in the order specified. e.g.
```console
test-script.sh -- testOne testTwo otherFunction
```
or
```console
shunit2 test-script.sh testOne testTwo otherFunction
```
In either case, three functions will be run as tests, `testOne`, `testTwo`, and `otherFunction`. Note that the function `otherFunction` would not normally be run by `shunit2` as part of the implicit collection of tests as its function name does not match the test function name pattern `test*`.
If a specified test function does not exist, `shunit2` will still attempt to run that function and thereby cause a failure which `shunit2` will catch and mark as a failed test. All other tests will run normally.
The specification of tests does not affect how `shunit2` looks for and executes the setup and tear down functions, which will still run as expected.
---
@ -406,29 +608,36 @@ Skipping can be controlled with the following functions: `startSkipping()`, `end
### <a name="getting-help"></a> Getting Help
For help, please send requests to either the shunit2-users@forestent.com mailing
list (archives available on the web at
https://groups.google.com/a/forestent.com/forum/#!forum/shunit2-users) or
directly to Kate Ward <kate dot ward at forestent dot com>.
### <a name="zsh"></a> Zsh
For compatibility with Zsh, there is one requirement that must be met -- the
`shwordsplit` option must be set. There are three ways to accomplish this.
1. In the unit-test script, add the following shell code snippet before sourcing
the `shunit2` library.
```sh
setopt shwordsplit
```
2. When invoking __zsh__ from either the command-line or as a script with `#!`,
add the `-y` parameter.
```sh
#! /bin/zsh -y
```
3. When invoking __zsh__ from the command-line, add `-o shwordsplit --` as
parameters before the script name.
```console
$ zsh -o shwordsplit -- some_script
```
[log4sh]: https://github.com/kward/log4sh
[shflags]: https://github.com/kward/shflags

@ -0,0 +1,47 @@
#! /bin/sh
#
# Initialize the local git hooks for this repository.
# https://git-scm.com/docs/githooks
topLevel=$(git rev-parse --show-toplevel)
if ! cd "${topLevel}"; then
echo "filed to cd into topLevel directory '${topLevel}'"
exit 1
fi
hooksDir="${topLevel}/.githooks"
if ! hooksPath=$(git config core.hooksPath); then
hooksPath="${topLevel}/.git/hooks"
fi
src="${hooksDir}/generic"
echo "linking hooks..."
for hook in \
applypatch-msg \
pre-applypatch \
post-applypatch \
pre-commit \
pre-merge-commit \
prepare-commit-msg \
commit-msg \
post-commit \
pre-rebase \
post-checkout \
post-merge \
pre-push \
pre-receive \
update \
post-receive \
post-update \
push-to-checkout \
pre-auto-gc \
post-rewrite \
sendemail-validate \
fsmonitor-watchman \
p4-pre-submit \
post-index-change
do
echo " ${hook}"
dest="${hooksPath}/${hook}"
ln -sf "${src}" "${dest}"
done

@ -3,7 +3,7 @@
#
# Versions determines the versions of all installed shells.
#
# Copyright 2008-2018 Kate Ward. All Rights Reserved.
# Copyright 2008-2020 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 License.
#
# Author: kate.ward@forestent.com (Kate Ward)
@ -18,7 +18,7 @@
ARGV0=`basename "$0"`
LSB_RELEASE='/etc/lsb-release'
VERSIONS_SHELLS='ash /bin/bash /bin/dash /bin/ksh /bin/mksh /bin/pdksh /bin/zsh /usr/xpg4/bin/sh /bin/sh /sbin/sh'
true; TRUE=$?
false; FALSE=$?
@ -49,6 +49,10 @@ versions_osName() {
10.11|10.11.[0-9]*) os_name_='Mac OS X El Capitan' ;;
10.12|10.12.[0-9]*) os_name_='macOS Sierra' ;;
10.13|10.13.[0-9]*) os_name_='macOS High Sierra' ;;
10.14|10.14.[0-9]*) os_name_='macOS Mojave' ;;
10.15|10.15.[0-9]*) os_name_='macOS Catalina' ;;
11.*) os_name_='macOS Big Sur' ;;
12.*) os_name_='macOS Monterey' ;;
*) os_name_='macOS' ;;
esac
;;
@ -133,10 +137,11 @@ versions_shellVersion() {
version_=''
case ${shell_} in
# SunOS shells.
/sbin/sh) ;;
/usr/xpg4/bin/sh) version_=`versions_shell_xpg4 "${shell_}"` ;;
# Generic shell.
*/sh)
# This could be one of any number of shells. Try until one fits.
version_=''
@ -147,16 +152,22 @@ versions_shellVersion() {
[ -z "${version_}" ] && version_=`versions_shell_xpg4 "${shell_}"`
[ -z "${version_}" ] && version_=`versions_shell_zsh "${shell_}"`
;;
# Specific shells.
ash) version_=`versions_shell_ash "${shell_}"` ;;
# bash - Bourne Again SHell (https://www.gnu.org/software/bash/)
*/bash) version_=`versions_shell_bash "${shell_}"` ;;
*/dash) version_=`versions_shell_dash` ;;
# ksh - KornShell (http://www.kornshell.com/)
*/ksh) version_=`versions_shell_ksh "${shell_}"` ;;
# mksh - MirBSD Korn Shell (http://www.mirbsd.org/mksh.htm)
*/mksh) version_=`versions_shell_ksh "${shell_}"` ;;
# pdksh - Public Domain Korn Shell (http://web.cs.mun.ca/~michael/pdksh/)
*/pdksh) version_=`versions_shell_pdksh "${shell_}"` ;;
# zsh (https://www.zsh.org/)
*/zsh) version_=`versions_shell_zsh "${shell_}"` ;;
# Unrecognized shell.
*) version_='invalid'
esac
@ -173,6 +184,8 @@ versions_shell_bash() {
$1 --version : 2>&1 |grep 'GNU bash' |sed 's/.*version \([^ ]*\).*/\1/'
}
# Assuming Ubuntu Linux until somebody comes up with a better test. The
# following test will return an empty string if dash is not installed.
versions_shell_dash() {
eval dpkg >/dev/null 2>&1
[ $? -eq 127 ] && return # Return if dpkg not found.
@ -193,6 +206,10 @@ versions_shell_ksh() {
else
versions_version_=''
fi
if [ -z "${versions_version_}" ]; then
# shellcheck disable=SC2016
versions_version_=`${versions_shell_} -c 'echo ${KSH_VERSION}'`
fi
if [ -z "${versions_version_}" ]; then
_versions_have_strings
versions_version_=`strings "${versions_shell_}" 2>&1 \
@ -207,6 +224,14 @@ versions_shell_ksh() {
unset versions_shell_ versions_version_
}
# mksh - MirBSD Korn Shell (http://www.mirbsd.org/mksh.htm)
# mksh is a successor to pdksh (Public Domain Korn Shell).
versions_shell_mksh() {
versions_shell_ksh
}
# pdksh - Public Domain Korn Shell
# pdksh is an obsolete shell, which was replaced by mksh (among others).
versions_shell_pdksh() {
_versions_have_strings
strings "$1" 2>&1 \

File diff suppressed because it is too large.

@ -0,0 +1,64 @@
#!/bin/sh
# vim:et:ft=sh:sts=2:sw=2
#
# shunit2 unit test for running subset(s) of tests based upon command line args.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# https://github.com/kward/shunit2
#
# Also shows how non-default tests or an arbitrary subset of tests can be run.
#
# Disable source following.
# shellcheck disable=SC1090,SC1091
# Load test helpers.
. ./shunit2_test_helpers
CUSTOM_TEST_RAN=''
# This test does not normally run because its name does not begin with "test".
# It will be run by setting the arguments to the script to include the name of
# this test.
custom_test() {
  # Arbitrary assert.
  assertTrue 0

  # The true intent is to set this variable, which will be tested below.
  CUSTOM_TEST_RAN='yup, we ran'
}
# Verify that `custom_test()` ran.
testCustomTestRan() {
assertNotNull "'custom_test()' did not run" "${CUSTOM_TEST_RAN}"
}
# Fail if this test runs, which it shouldn't if arguments are set correctly.
testShouldFail() {
  fail 'testShouldFail should not be run if argument parsing works'
}
oneTimeSetUp() {
  th_oneTimeSetUp
}
# If zero/one argument(s) are provided, this test is being run in its
# entirety, and therefore we want to set the arguments to the script to
# (simulate and) test the processing of command-line specified tests. If we
# don't, then the "testShouldFail" test will run (by default) and the overall
# test will fail.
#
# However, if two or more arguments are provided, then assume this test script
# is being run by hand to experiment with command-line test specification, and
# then don't override the user provided arguments.
if [ "$#" -le 1 ]; then
  # We set the arguments in a POSIX way, inasmuch as we can;
  # helpful tip:
  # https://unix.stackexchange.com/questions/258512/how-to-remove-a-positional-parameter-from
  set -- '--' 'custom_test' 'testCustomTestRan'
fi
# Load and run shunit2.
# shellcheck disable=SC2034
[ -n "${ZSH_VERSION:-}" ] && SHUNIT_PARENT=$0
. "${TH_SHUNIT}"

@ -3,12 +3,16 @@
#
# shunit2 unit test for assert functions.
#
# Copyright 2008-2017 Kate Ward. All Rights Reserved.
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
# In this file, all assert calls under test must be wrapped in () so they do not
# influence the metrics of the test itself.
#
# Disable source following.
# shellcheck disable=SC1090,SC1091
@ -22,175 +26,377 @@ stderrF="${TMPDIR:-/tmp}/STDERR"
commonEqualsSame() {
fn=$1
# These should succeed.
desc='equal'
if (${fn} 'x' 'x' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
desc='equal_with_message'
if (${fn} 'some message' 'x' 'x' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
desc='equal_with_spaces'
if (${fn} 'abc def' 'abc def' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
desc='equal_null_values'
if (${fn} '' '' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
# These should fail.
desc='not_equal'
if (${fn} 'x' 'y' >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
}
commonNotEqualsSame() {
fn=$1
# These should succeed.
desc='not_same'
if (${fn} 'x' 'y' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
desc='not_same_with_message'
if (${fn} 'some message' 'x' 'y' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
# These should fail.
desc='same'
if (${fn} 'x' 'x' >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
desc='unequal_null_values'
if (${fn} '' '' >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
}
testAssertEquals() { commonEqualsSame 'assertEquals'; }
testAssertNotEquals() { commonNotEqualsSame 'assertNotEquals'; }
testAssertSame() { commonEqualsSame 'assertSame'; }
testAssertNotSame() { commonNotEqualsSame 'assertNotSame'; }
testAssertContains() {
# Content is present.
while read -r desc container content; do
if (assertContains "${container}" "${content}" >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
done <<EOF
abc_at_start abcdef abc
bcd_in_middle abcdef bcd
def_at_end abcdef def
EOF
# Content missing.
while read -r desc container content; do
if (assertContains "${container}" "${content}" >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: unexpected failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
done <<EOF
xyz_not_present abcdef xyz
zab_contains_start abcdef zab
efg_contains_end abcdef efg
acf_has_parts abcdef acf
EOF
desc="content_starts_with_dash"
if (assertContains 'abc -Xabc def' '-Xabc' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
desc="contains_with_message"
if (assertContains 'some message' 'abcdef' 'abc' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
}
testAssertNotContains() {
# Content not present.
while read -r desc container content; do
if (assertNotContains "${container}" "${content}" >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
done <<EOF
xyz_not_present abcdef xyz
zab_contains_start abcdef zab
efg_contains_end abcdef efg
acf_has_parts abcdef acf
EOF
# Content present.
while read -r desc container content; do
if (assertNotContains "${container}" "${content}" >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
done <<EOF
abc_is_present abcdef abc
EOF
desc='not_contains_with_message'
if (assertNotContains 'some message' 'abcdef' 'xyz' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
}
testAssertNull() {
while read -r desc value; do
if (assertNull "${value}" >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: unexpected failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
done <<'EOF'
x_alone x
x_double_quote_a x"a
x_single_quote_a x'a
x_dollar_a x$a
x_backtick_a x`a
EOF
desc='null_without_message'
if (assertNull '' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
desc='null_with_message'
if (assertNull 'some message' '' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
desc='x_is_not_null'
if (assertNull 'x' >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
}
testAssertNotNull() {
while read -r desc value; do
if (assertNotNull "${value}" >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
done <<'EOF'
x_alone x
x_double_quote_b x"b
x_single_quote_b x'b
x_dollar_b x$b
x_backtick_b x`b
EOF
desc='not_null_with_message'
if (assertNotNull 'some message' 'x' >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
desc="double_ticks_are_null"
if (assertNotNull '' >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
}
testAssertTrue() {
( assertTrue 0 >"${stdoutF}" 2>"${stderrF}" )
th_assertTrueWithNoOutput 'true' $? "${stdoutF}" "${stderrF}"
( assertTrue "${MSG}" 0 >"${stdoutF}" 2>"${stderrF}" )
th_assertTrueWithNoOutput 'true, with msg' $? "${stdoutF}" "${stderrF}"
( assertTrue '[ 0 -eq 0 ]' >"${stdoutF}" 2>"${stderrF}" )
th_assertTrueWithNoOutput 'true condition' $? "${stdoutF}" "${stderrF}"
( assertTrue 1 >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'false' $? "${stdoutF}" "${stderrF}"
( assertTrue '[ 0 -eq 1 ]' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'false condition' $? "${stdoutF}" "${stderrF}"
( assertTrue '' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'null' $? "${stdoutF}" "${stderrF}"
( assertTrue >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithError 'too few arguments' $? "${stdoutF}" "${stderrF}"
( assertTrue arg1 arg2 arg3 >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithError 'too many arguments' $? "${stdoutF}" "${stderrF}"
# True values.
while read -r desc value; do
if (assertTrue "${value}" >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
done <<'EOF'
zero 0
zero_eq_zero [ 0 -eq 0 ]
EOF
# Not true values.
while read -r desc value; do
if (assertTrue "${value}" >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
done <<EOF
one 1
zero_eq_1 [ 0 -eq 1 ]
null
EOF
desc='true_with_message'
if (assertTrue 'some message' 0 >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
}
testAssertFalse() {
( assertFalse 1 >"${stdoutF}" 2>"${stderrF}" )
th_assertTrueWithNoOutput 'false' $? "${stdoutF}" "${stderrF}"
( assertFalse "${MSG}" 1 >"${stdoutF}" 2>"${stderrF}" )
th_assertTrueWithNoOutput 'false, with msg' $? "${stdoutF}" "${stderrF}"
( assertFalse '[ 0 -eq 1 ]' >"${stdoutF}" 2>"${stderrF}" )
th_assertTrueWithNoOutput 'false condition' $? "${stdoutF}" "${stderrF}"
( assertFalse 0 >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'true' $? "${stdoutF}" "${stderrF}"
( assertFalse '[ 0 -eq 0 ]' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'true condition' $? "${stdoutF}" "${stderrF}"
( assertFalse '' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'true condition' $? "${stdoutF}" "${stderrF}"
# False values.
while read -r desc value; do
if (assertFalse "${value}" >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
done <<EOF
one 1
zero_eq_1 [ 0 -eq 1 ]
null
EOF
  # Not false values.
while read -r desc value; do
if (assertFalse "${value}" >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
done <<'EOF'
zero 0
zero_eq_zero [ 0 -eq 0 ]
EOF
desc='false_with_message'
if (assertFalse 'some message' 1 >"${stdoutF}" 2>"${stderrF}"); then
th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
else
fail "${desc}: unexpected failure"
_showTestOutput
fi
}
( assertFalse >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithError 'too few arguments' $? "${stdoutF}" "${stderrF}"
FUNCTIONS='
assertEquals assertNotEquals
assertSame assertNotSame
assertContains assertNotContains
assertNull assertNotNull
assertTrue assertFalse
'
testTooFewArguments() {
for fn in ${FUNCTIONS}; do
# These functions support zero arguments.
case "${fn}" in
assertNull) continue ;;
assertNotNull) continue ;;
esac
desc="${fn}"
if (${fn} >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
got=$? want=${SHUNIT_ERROR}
assertEquals "${desc}: incorrect return code" "${got}" "${want}"
th_assertFalseWithError "${desc}" "${got}" "${stdoutF}" "${stderrF}"
fi
done
}
( assertFalse arg1 arg2 arg3 >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithError 'too many arguments' $? "${stdoutF}" "${stderrF}"
testTooManyArguments() {
for fn in ${FUNCTIONS}; do
desc="${fn}"
if (${fn} arg1 arg2 arg3 arg4 >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
got=$? want=${SHUNIT_ERROR}
assertEquals "${desc}: incorrect return code" "${got}" "${want}"
th_assertFalseWithError "${desc}" "${got}" "${stdoutF}" "${stderrF}"
fi
done
}
oneTimeSetUp() {
th_oneTimeSetUp
MSG='This is a test message'
}
# showTestOutput for the most recently run test.
_showTestOutput() { th_showOutput "${SHUNIT_FALSE}" "${stdoutF}" "${stderrF}"; }
# Load and run shunit2.
# shellcheck disable=SC2034
[ -n "${ZSH_VERSION:-}" ] && SHUNIT_PARENT=$0

@ -1,10 +1,11 @@
#! /bin/sh
# vim:et:ft=sh:sts=2:sw=2
#
# shUnit2 unit test for failure functions
# shUnit2 unit test for failure functions. These functions do not test values.
#
# Copyright 2008-2017 Kate Ward. All Rights Reserved.
# Released under the LGPL (GNU Lesser General Public License)
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
@ -20,60 +21,114 @@ stderrF="${TMPDIR:-/tmp}/STDERR"
. ./shunit2_test_helpers
testFail() {
( fail >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'fail' $? "${stdoutF}" "${stderrF}"
( fail "${MSG}" >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'fail with msg' $? "${stdoutF}" "${stderrF}"
( fail arg1 >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'too many arguments' $? "${stdoutF}" "${stderrF}"
# Test without a message.
desc='fail_without_message'
if ( fail >"${stdoutF}" 2>"${stderrF}" ); then
fail "${desc}: expected a failure"
th_showOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
# Test with a message.
desc='fail_with_message'
if ( fail 'some message' >"${stdoutF}" 2>"${stderrF}" ); then
fail "${desc}: expected a failure"
th_showOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
}
testFailNotEquals() {
( failNotEquals 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'same' $? "${stdoutF}" "${stderrF}"
( failNotEquals "${MSG}" 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'same with msg' $? "${stdoutF}" "${stderrF}"
( failNotEquals 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'not same' $? "${stdoutF}" "${stderrF}"
( failNotEquals '' '' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'null values' $? "${stdoutF}" "${stderrF}"
( failNotEquals >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithError 'too few arguments' $? "${stdoutF}" "${stderrF}"
( failNotEquals arg1 arg2 arg3 arg4 >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithError 'too many arguments' $? "${stdoutF}" "${stderrF}"
# FN_TESTS hold all the functions to be tested.
# shellcheck disable=SC2006
FN_TESTS=`
# fn num_args pattern
cat <<EOF
fail 1
failNotEquals 3 but was:
failFound 2 found:
failNotFound 2 not found:
failSame 3 not same
failNotSame 3 but was:
EOF
`
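# How the FN_TESTS table is consumed (a sketch inferred from the tests below,
# not stated in the source): each row lists the function name, the total number
# of arguments it accepts (optional message included), and a pattern its
# failure output should contain. The row "failNotEquals 3 but was:" therefore
# implies a call such as
#   failNotEquals 'some message' arg1 arg2
# whose STDOUT is expected to match 'but was:'.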
testFailsWithArgs() {
echo "${FN_TESTS}" |\
while read -r fn num_args pattern; do
case "${fn}" in
fail) continue ;;
esac
# Test without a message.
desc="${fn}_without_message"
if ( ${fn} arg1 arg2 >"${stdoutF}" 2>"${stderrF}" ); then
fail "${desc}: expected a failure"
th_showOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
fi
# Test with a message.
arg1='' arg2=''
case ${num_args} in
1) ;;
2) arg1='arg1' ;;
3) arg1='arg1' arg2='arg2' ;;
esac
desc="${fn}_with_message"
if ( ${fn} 'some message' ${arg1} ${arg2} >"${stdoutF}" 2>"${stderrF}" ); then
fail "${desc}: expected a failure"
th_showOutput
else
th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
if ! grep -- "${pattern}" "${stdoutF}" >/dev/null; then
fail "${desc}: incorrect message to STDOUT"
th_showOutput
fi
fi
done
}
testFailSame() {
( failSame 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'same' $? "${stdoutF}" "${stderrF}"
( failSame "${MSG}" 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'same with msg' $? "${stdoutF}" "${stderrF}"
( failSame 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'not same' $? "${stdoutF}" "${stderrF}"
( failSame '' '' >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithOutput 'null values' $? "${stdoutF}" "${stderrF}"
( failSame >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithError 'too few arguments' $? "${stdoutF}" "${stderrF}"
testTooFewArguments() {
echo "${FN_TESTS}" \
|while read -r fn num_args pattern; do
# Skip functions that support a single message argument.
if [ "${num_args}" -eq 1 ]; then
continue
fi
desc="${fn}"
if (${fn} >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
got=$? want=${SHUNIT_ERROR}
assertEquals "${desc}: incorrect return code" "${got}" "${want}"
th_assertFalseWithError "${desc}" "${got}" "${stdoutF}" "${stderrF}"
fi
done
}
( failSame arg1 arg2 arg3 arg4 >"${stdoutF}" 2>"${stderrF}" )
th_assertFalseWithError 'too many arguments' $? "${stdoutF}" "${stderrF}"
testTooManyArguments() {
echo "${FN_TESTS}" \
|while read -r fn num_args pattern; do
desc="${fn}"
if (${fn} arg1 arg2 arg3 arg4 >"${stdoutF}" 2>"${stderrF}"); then
fail "${desc}: expected a failure"
_showTestOutput
else
got=$? want=${SHUNIT_ERROR}
assertEquals "${desc}: incorrect return code" "${got}" "${want}"
th_assertFalseWithError "${desc}" "${got}" "${stdoutF}" "${stderrF}"
fi
done
}
oneTimeSetUp() {
th_oneTimeSetUp
MSG='This is a test message'
}
# Load and run shUnit2.

@ -0,0 +1,99 @@
#! /bin/sh
# vim:et:ft=sh:sts=2:sw=2
#
# shUnit2 unit tests for general commands.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
# Disable source following.
# shellcheck disable=SC1090,SC1091
# These variables will be overridden by the test helpers.
stdoutF="${TMPDIR:-/tmp}/STDOUT"
stderrF="${TMPDIR:-/tmp}/STDERR"
# Load test helpers.
. ./shunit2_test_helpers
testSkipping() {
# We shouldn't be skipping to start.
if isSkipping; then
th_error 'skipping *should not be* enabled'
return
fi
startSkipping
was_skipping_started=${SHUNIT_FALSE}
if isSkipping; then was_skipping_started=${SHUNIT_TRUE}; fi
endSkipping
was_skipping_ended=${SHUNIT_FALSE}
if isSkipping; then was_skipping_ended=${SHUNIT_TRUE}; fi
assertEquals "skipping wasn't started" "${was_skipping_started}" "${SHUNIT_TRUE}"
assertNotEquals "skipping wasn't ended" "${was_skipping_ended}" "${SHUNIT_TRUE}"
return 0
}
testStartSkippingWithMessage() {
unittestF="${SHUNIT_TMPDIR}/unittest"
sed 's/^#//' >"${unittestF}" <<\EOF
## Start skipping with a message.
#testSkipping() {
# startSkipping 'SKIP-a-Dee-Doo-Dah'
#}
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
# Ignoring errors with `|| :` as we only care about `FAILED` in the output.
( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ) || :
if ! grep '\[skipping\] SKIP-a-Dee-Doo-Dah' "${stderrF}" >/dev/null; then
fail 'skipping message was not generated'
fi
return 0
}
testStartSkippingWithoutMessage() {
unittestF="${SHUNIT_TMPDIR}/unittest"
sed 's/^#//' >"${unittestF}" <<\EOF
## Start skipping with a message.
#testSkipping() {
# startSkipping
#}
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
# Ignoring errors with `|| :` as we only care about `FAILED` in the output.
( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ) || :
if grep '\[skipping\]' "${stderrF}" >/dev/null; then
fail 'skipping message was unexpectedly generated'
fi
return 0
}
setUp() {
for f in "${stdoutF}" "${stderrF}"; do
cp /dev/null "${f}"
done
# Reconfigure coloring as some tests override default behavior.
_shunit_configureColor "${SHUNIT_COLOR_DEFAULT}"
# shellcheck disable=SC2034,SC2153
SHUNIT_CMD_TPUT=${__SHUNIT_CMD_TPUT}
}
oneTimeSetUp() {
SHUNIT_COLOR_DEFAULT="${SHUNIT_COLOR}"
th_oneTimeSetUp
}
# Load and run shUnit2.
# shellcheck disable=SC2034
[ -n "${ZSH_VERSION:-}" ] && SHUNIT_PARENT=$0
. "${TH_SHUNIT}"

@ -3,17 +3,15 @@
#
# shunit2 unit test for macros.
#
# Copyright 2008-2017 Kate Ward. All Rights Reserved.
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
### ShellCheck http://www.shellcheck.net/
# Disable source following.
# shellcheck disable=SC1090,SC1091
# Presence of LINENO variable is checked.
# shellcheck disable=SC2039
# These variables will be overridden by the test helpers.
stdoutF="${TMPDIR:-/tmp}/STDOUT"
@ -23,215 +21,223 @@ stderrF="${TMPDIR:-/tmp}/STDERR"
. ./shunit2_test_helpers
testAssertEquals() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_ASSERT_EQUALS_} 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_EQUALS_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_EQUALS_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_ASSERT_EQUALS_} '"some msg"' 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_EQUALS_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_EQUALS_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testAssertNotEquals() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_ASSERT_NOT_EQUALS_} 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_NOT_EQUALS_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_NOT_EQUALS_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_ASSERT_NOT_EQUALS_} '"some msg"' 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_NOT_EQUALS_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_NOT_EQUALS_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testSame() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_ASSERT_SAME_} 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_SAME_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_SAME_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_ASSERT_SAME_} '"some msg"' 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_SAME_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_SAME_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testNotSame() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_ASSERT_NOT_SAME_} 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_NOT_SAME_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_NOT_SAME_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_ASSERT_NOT_SAME_} '"some msg"' 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_NOT_SAME_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_NOT_SAME_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testNull() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_ASSERT_NULL_} 'x' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_NULL_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_NULL_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_ASSERT_NULL_} '"some msg"' 'x' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_NULL_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_NULL_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testNotNull()
{
# start skipping if LINENO not available
[ -z "${LINENO:-}" ] && startSkipping
testNotNull() {
isLinenoWorking || startSkipping
( ${_ASSERT_NOT_NULL_} '' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_NOT_NULL_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_NOT_NULL_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_ASSERT_NOT_NULL_} '"some msg"' '""' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_NOT_NULL_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stdoutF}" "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_NOT_NULL_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testAssertTrue() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_ASSERT_TRUE_} "${SHUNIT_FALSE}" >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_TRUE_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_TRUE_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_ASSERT_TRUE_} '"some msg"' "${SHUNIT_FALSE}" >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_TRUE_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_TRUE_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testAssertFalse() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_ASSERT_FALSE_} "${SHUNIT_TRUE}" >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_FALSE_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_FALSE_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_ASSERT_FALSE_} '"some msg"' "${SHUNIT_TRUE}" >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_ASSERT_FALSE_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_ASSERT_FALSE_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testFail() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_FAIL_} >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_FAIL_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_FAIL_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_FAIL_} '"some msg"' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_FAIL_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_FAIL_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testFailNotEquals()
{
# start skipping if LINENO not available
[ -z "${LINENO:-}" ] && startSkipping
testFailNotEquals() {
isLinenoWorking || startSkipping
( ${_FAIL_NOT_EQUALS_} 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_FAIL_NOT_EQUALS_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_FAIL_NOT_EQUALS_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_FAIL_NOT_EQUALS_} '"some msg"' 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_FAIL_NOT_EQUALS_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_FAIL_NOT_EQUALS_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testFailSame() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_FAIL_SAME_} 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_FAIL_SAME_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_FAIL_SAME_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_FAIL_SAME_} '"some msg"' 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_FAIL_SAME_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_FAIL_SAME_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
testFailNotSame() {
# Start skipping if LINENO not available.
[ -z "${LINENO:-}" ] && startSkipping
isLinenoWorking || startSkipping
( ${_FAIL_NOT_SAME_} 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_FAIL_NOT_SAME_ failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_FAIL_NOT_SAME_ failed to produce an ASSERT message'
showTestOutput
fi
( ${_FAIL_NOT_SAME_} '"some msg"' 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null
rtrn=$?
assertTrue '_FAIL_NOT_SAME_ w/ msg failure' ${rtrn}
[ "${rtrn}" -ne "${SHUNIT_TRUE}" ] && cat "${stderrF}" >&2
if ! wasAssertGenerated; then
fail '_FAIL_NOT_SAME_ (with a message) failed to produce an ASSERT message'
showTestOutput
fi
}
oneTimeSetUp() {
th_oneTimeSetUp
if ! isLinenoWorking; then
# shellcheck disable=SC2016
th_warn '${LINENO} is not working for this shell. Tests will be skipped.'
fi
}
# isLinenoWorking returns true if the `$LINENO` shell variable works properly.
isLinenoWorking() {
# shellcheck disable=SC2016
ln='eval echo "${LINENO:-}"'
case ${ln} in
[0-9]*) return "${SHUNIT_TRUE}" ;;
-[0-9]*) return "${SHUNIT_FALSE}" ;; # The dash shell produces negative values.
esac
return "${SHUNIT_FALSE}"
}
# showTestOutput for the most recently run test.
showTestOutput() { th_showOutput "${SHUNIT_FALSE}" "${stdoutF}" "${stderrF}"; }
# wasAssertGenerated returns true if an ASSERT was generated to STDOUT.
wasAssertGenerated() { grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null; }
# Disable output coloring as it breaks the tests.
SHUNIT_COLOR='none'; export SHUNIT_COLOR

@ -3,19 +3,17 @@
#
# shUnit2 unit tests of miscellaneous things
#
# Copyright 2008-2018 Kate Ward. All Rights Reserved.
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
### ShellCheck http://www.shellcheck.net/
# $() are not fully portable (POSIX != portable).
# shellcheck disable=SC2006
# Allow usage of legacy backticked `...` notation instead of $(...).
# shellcheck disable=SC2006
# Disable source following.
# shellcheck disable=SC1090,SC1091
# Not wanting to escape single quotes.
# shellcheck disable=SC1003
# These variables will be overridden by the test helpers.
stdoutF="${TMPDIR:-/tmp}/STDOUT"
@ -41,14 +39,18 @@ testUnboundVariable() {
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
( exec "${SHUNIT_SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" )
assertFalse 'expected a non-zero exit value' $?
grep '^ASSERT:Unknown failure' "${stdoutF}" >/dev/null
assertTrue 'assert message was not generated' $?
grep '^Ran [0-9]* test' "${stdoutF}" >/dev/null
assertTrue 'test count message was not generated' $?
grep '^FAILED' "${stdoutF}" >/dev/null
assertTrue 'failure message was not generated' $?
if ( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ); then
fail 'expected a non-zero exit value'
fi
if ! grep '^ASSERT:unknown failure' "${stdoutF}" >/dev/null; then
fail 'assert message was not generated'
fi
if ! grep '^Ran [0-9]* test' "${stdoutF}" >/dev/null; then
fail 'test count message was not generated'
fi
if ! grep '^FAILED' "${stdoutF}" >/dev/null; then
fail 'failure message was not generated'
fi
}
# assertEquals repeats message argument.
@ -57,7 +59,8 @@ testIssue7() {
# Disable coloring so 'ASSERT:' lines can be matched correctly.
_shunit_configureColor 'none'
( assertEquals 'Some message.' 1 2 >"${stdoutF}" 2>"${stderrF}" )
# Ignoring errors with `|| :` as we only care about the message in this test.
( assertEquals 'Some message.' 1 2 >"${stdoutF}" 2>"${stderrF}" ) || :
diff "${stdoutF}" - >/dev/null <<EOF
ASSERT:Some message. expected:<1> but was:<2>
EOF
@ -77,19 +80,37 @@ testIssue29() {
#SHUNIT_TEST_PREFIX='--- '
#. ${TH_SHUNIT}
EOF
( exec "${SHUNIT_SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" )
( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" )
grep '^--- test_assert' "${stdoutF}" >/dev/null
rtrn=$?
assertEquals "${SHUNIT_TRUE}" "${rtrn}"
[ "${rtrn}" -eq "${SHUNIT_TRUE}" ] || cat "${stdoutF}" >&2
}
# Test that certain external commands sometimes "stubbed" by users are escaped.
testIssue54() {
for c in mkdir rm cat chmod sed; do
if grep "^[^#]*${c} " "${TH_SHUNIT}" | grep -qv "command ${c}"; then
fail "external call to ${c} not protected somewhere"
fi
done
# shellcheck disable=2016
if grep '^[^#]*[^ ] *\[' "${TH_SHUNIT}" | grep -qv '${__SHUNIT_BUILTIN} \['; then
fail 'call to [ not protected somewhere'
fi
# shellcheck disable=2016
if grep '^[^#]* *\.' "${TH_SHUNIT}" | grep -qv '${__SHUNIT_BUILTIN} \.'; then
fail 'call to . not protected somewhere'
fi
}
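# A minimal sketch of what "protected" means above (illustrative, not part of
# the test): shUnit2 prefixes external tools with `command` so that a
# same-named function defined by a user to stub the tool is bypassed, e.g.
#   rm() { echo 'stubbed rm'; }      # a user stub in a test script
#   command rm -f "${some_tmpfile}"  # still invokes the real rm (placeholder name)
# The greps above only verify that every such call site carries that prefix
# (or ${__SHUNIT_BUILTIN} for the `[` and `.` builtins).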
# shUnit2 should not exit with 0 when it has syntax errors.
# https://github.com/kward/shunit2/issues/69
testIssue69() {
unittestF="${SHUNIT_TMPDIR}/unittest"
for t in Equals NotEquals Null NotNull Same NotSame True False; do
# Note: assertNull not tested as zero arguments == null, which is valid.
for t in Equals NotEquals NotNull Same NotSame True False; do
assert="assert${t}"
sed 's/^#//' >"${unittestF}" <<EOF
## Asserts with invalid argument counts should be counted as failures.
@ -97,7 +118,8 @@ testIssue69() {
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
( exec "${SHUNIT_SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" )
# Ignoring errors with `|| :` as we only care about `FAILED` in the output.
( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ) || :
grep '^FAILED' "${stdoutF}" >/dev/null
assertTrue "failure message for ${assert} was not generated" $?
done
@ -105,7 +127,7 @@ EOF
# Ensure that test fails if setup/teardown functions fail.
testIssue77() {
unittestF="${SHUNIT_TMPDIR}/unittest"
unittestF="${SHUNIT_TMPDIR}/unittest"
for func in oneTimeSetUp setUp tearDown oneTimeTearDown; do
sed 's/^#//' >"${unittestF}" <<EOF
## Environment failure should end test.
@ -114,7 +136,8 @@ testIssue77() {
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
( exec "${SHUNIT_SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" )
# Ignoring errors with `|| :` as we only care about `FAILED` in the output.
( exec "${SHELL:-sh}" "${unittestF}" ) >"${stdoutF}" 2>"${stderrF}" || :
grep '^FAILED' "${stdoutF}" >/dev/null
assertTrue "failure of ${func}() did not end test" $?
done
@ -135,9 +158,24 @@ testIssue84() {
#SHUNIT_TEST_PREFIX='--- '
#. ${TH_SHUNIT}
EOF
( exec "${SHUNIT_SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" )
grep '^FAILED' "${stdoutF}" >/dev/null
assertTrue "failure message for ${assert} was not generated" $?
# Ignoring errors with `|| :` as we only care about `FAILED` in the output.
( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ) || :
if ! grep '^FAILED' "${stdoutF}" >/dev/null; then
fail 'failure message was not generated'
fi
}
# Demonstrate that asserts are no longer executed in subshells.
# https://github.com/kward/shunit2/issues/123
#
# NOTE: this test only works if the `${BASH_SUBSHELL}` variable is present.
testIssue123() {
if [ -z "${BASH_SUBSHELL:-}" ]; then
# shellcheck disable=SC2016
startSkipping 'The ${BASH_SUBSHELL} variable is unavailable in this shell.'
fi
# shellcheck disable=SC2016
assertTrue 'not in subshell' '[[ ${BASH_SUBSHELL} -eq 0 ]]'
}
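# Why running asserts in the current shell matters (an illustrative sketch, not
# part of the test): if asserts were evaluated in a subshell, state changed
# while evaluating the condition would be lost to the caller, e.g.
#   count=0
#   assertTrue 'increments' 'count=$((count+1)); [ "${count}" -eq 1 ]'
#   # with subshell-based asserts, ${count} would still be 0 at this point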
testPrepForSourcing() {
@ -146,55 +184,6 @@ testPrepForSourcing() {
assertEquals './abc' "`_shunit_prepForSourcing 'abc'`"
}
testEscapeCharInStr() {
while read -r desc char str want; do
got=`_shunit_escapeCharInStr "${char}" "${str}"`
assertEquals "${desc}" "${want}" "${got}"
done <<'EOF'
backslash \ '' ''
backslash_pre \ \def \\def
backslash_mid \ abc\def abc\\def
backslash_post \ abc\ abc\\
quote " '' ''
quote_pre " "def \"def
quote_mid " abc"def abc\"def
quote_post " abc" abc\"
string $ '' ''
string_pre $ $def \$def
string_mid $ abc$def abc\$def
string_post $ abc$ abc\$
EOF
# TODO(20170924:kward) fix or remove.
# actual=`_shunit_escapeCharInStr "'" ''`
# assertEquals '' "${actual}"
# assertEquals "abc\\'" `_shunit_escapeCharInStr "'" "abc'"`
# assertEquals "abc\\'def" `_shunit_escapeCharInStr "'" "abc'def"`
# assertEquals "\\'def" `_shunit_escapeCharInStr "'" "'def"`
# # Must put the backtick in a variable so the shell doesn't misinterpret it
# # while inside a backticked sequence (e.g. `echo '`'` would fail).
# backtick='`'
# actual=`_shunit_escapeCharInStr ${backtick} ''`
# assertEquals '' "${actual}"
# assertEquals '\`abc' \
# `_shunit_escapeCharInStr "${backtick}" ${backtick}'abc'`
# assertEquals 'abc\`' \
# `_shunit_escapeCharInStr "${backtick}" 'abc'${backtick}`
# assertEquals 'abc\`def' \
# `_shunit_escapeCharInStr "${backtick}" 'abc'${backtick}'def'`
}
testEscapeCharInStr_specialChars() {
# Make sure our forward slash doesn't upset sed.
assertEquals '/' "`_shunit_escapeCharInStr '\' '/'`"
# Some shells escape these differently.
# TODO(20170924:kward) fix or remove.
#assertEquals '\\a' `_shunit_escapeCharInStr '\' '\a'`
#assertEquals '\\b' `_shunit_escapeCharInStr '\' '\b'`
}
# Test the various ways of declaring functions.
#
# Prefixing (then stripping) with comment symbol so these functions aren't
@ -223,23 +212,61 @@ testExtractTestFunctions() {
#func_with_test_vars() {
# testVariable=1234
#}
## Function with keyword but no parenthesis
#function test6 { echo '6'; }
## Function with keyword but no parenthesis, multi-line
#function test7 {
# echo '7';
#}
## Function with no parenthesis, '{' on next line
#function test8
#{
# echo '8'
#}
## Function with hyphenated name
#test-9() {
# echo '9';
#}
## Function without parenthesis or keyword
#test_foobar { echo 'hello world'; }
## Function with multiple function keywords
#function function test_test_test() { echo 'lorem'; }
EOF
actual=`_shunit_extractTestFunctions "${f}"`
assertEquals 'testABC test_def testG3 test4 test5' "${actual}"
assertEquals 'testABC test_def testG3 test4 test5 test6 test7 test8 test-9' "${actual}"
}
# Test that certain external commands sometimes "stubbed" by users
# are escaped. See Issue #54.
testProtectedCommands() {
for c in mkdir rm cat chmod; do
grep "^[^#]*${c} " "${TH_SHUNIT}" | grep -qv "command ${c}"
assertFalse "external call to ${c} not protected somewhere" $?
done
grep '^[^#]*[^ ] *\[' "${TH_SHUNIT}" | grep -qv 'command \['
assertFalse "call to [ ... ] not protected somewhere" $?
grep '^[^#]* *\.' "${TH_SHUNIT}" | grep -qv 'command \.'
assertFalse "call to . not protected somewhere" $?
testColors() {
while read -r cmd colors desc; do
SHUNIT_CMD_TPUT=${cmd}
want=${colors} got=`_shunit_colors`
assertEquals "${desc}: incorrect number of colors;" \
"${got}" "${want}"
done <<'EOF'
missing_tput 16 missing tput command
mock_tput 256 mock tput command
EOF
}
testColorsWithoutTERM() {
SHUNIT_CMD_TPUT='mock_tput'
got=`TERM='' _shunit_colors`
want=16
assertEquals "${got}" "${want}"
}
mock_tput() {
if [ -z "${TERM}" ]; then
# shellcheck disable=SC2016
echo 'tput: No value for $TERM and no -T specified'
return 2
fi
if [ "$1" = 'colors' ]; then
echo 256
return 0
fi
return 1
}
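# How the mock drives the color tests above (inferred, not stated in this
# file): _shunit_colors is assumed to run "${SHUNIT_CMD_TPUT} colors" and fall
# back to 16 colors when that fails, so
#   TERM=xterm mock_tput colors   -> prints 256, exit 0   (256 reported)
#   TERM=''    mock_tput colors   -> error message, exit 2 (falls back to 16)
#   missing_tput                  -> command not found     (falls back to 16)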
setUp() {
@ -249,6 +276,9 @@ setUp() {
# Reconfigure coloring as some tests override default behavior.
_shunit_configureColor "${SHUNIT_COLOR_DEFAULT}"
# shellcheck disable=SC2034,SC2153
SHUNIT_CMD_TPUT=${__SHUNIT_CMD_TPUT}
}
oneTimeSetUp() {

@ -0,0 +1,70 @@
#! /bin/sh
# vim:et:ft=sh:sts=2:sw=2
#
# shUnit2 unit tests for `shopt` support.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
# Disable source following.
# shellcheck disable=SC1090,SC1091
# Load test helpers.
. ./shunit2_test_helpers
# Call shopt from a variable so it can be mocked if it doesn't work.
SHOPT_CMD='shopt'
testNullglob() {
isShoptWorking || startSkipping
nullglob=$(${SHOPT_CMD} nullglob |cut -f2)
# Test without nullglob.
${SHOPT_CMD} -u nullglob
assertEquals 'test without nullglob' 0 0
# Test with nullglob.
${SHOPT_CMD} -s nullglob
assertEquals 'test with nullglob' 1 1
# Reset nullglob.
if [ "${nullglob}" = "on" ]; then
${SHOPT_CMD} -s nullglob
else
${SHOPT_CMD} -u nullglob
fi
unset nullglob
}
oneTimeSetUp() {
th_oneTimeSetUp
if ! isShoptWorking; then
SHOPT_CMD='mock_shopt'
fi
}
# isShoptWorking returns true if the `shopt` shell command is available.
# NOTE: `shopt` is not defined as part of the POSIX standard.
isShoptWorking() {
# shellcheck disable=SC2039,SC3044
( shopt >/dev/null 2>&1 );
}
mock_shopt() {
if [ $# -eq 0 ]; then
echo "nullglob off"
fi
return
}
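# Taken together (an illustrative note): on shells without `shopt` (plain POSIX
# sh, dash, etc.) isShoptWorking fails, oneTimeSetUp swaps in mock_shopt, and
# testNullglob runs in skipping mode; the mocked shopt calls are harmless since
# mock_shopt only answers "nullglob off" when called without arguments and
# otherwise does nothing.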
# Load and run shUnit2.
# shellcheck disable=SC2034
[ -n "${ZSH_VERSION:-}" ] && SHUNIT_PARENT="$0"
. "${TH_SHUNIT}"

@ -3,8 +3,9 @@
#
# shUnit2 unit test for standalone operation.
#
# Copyright 2010-2017 Kate Ward. All Rights Reserved.
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
@ -13,13 +14,10 @@
# the name of a unit test script, works. When run, this script determines if it
# is running as a standalone program, and calls main() if it is.
#
### ShellCheck http://www.shellcheck.net/
# $() are not fully portable (POSIX != portable).
# shellcheck disable=SC2006
# Disable source following.
# shellcheck disable=SC1090,SC1091
ARGV0="`basename "$0"`"
ARGV0=$(basename "$0")
# Load test helpers.
. ./shunit2_test_helpers
@ -32,7 +30,7 @@ main() {
${TH_SHUNIT} "${ARGV0}"
}
# Are we running as a standalone?
if [ "${ARGV0}" = 'shunit2_test_standalone.sh' ]; then
if [ $# -gt 0 ]; then main "$@"; else main; fi
# Run main() if are running as a standalone script.
if [ "${ARGV0}" = 'shunit2_standalone_test.sh' ]; then
main "$@"
fi

@ -2,25 +2,27 @@
#
# shUnit2 unit test common functions
#
# Copyright 2008 Kate Ward. All Rights Reserved.
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
### ShellCheck (http://www.shellcheck.net/)
# Commands are purposely escaped so they can be mocked outside shUnit2.
# shellcheck disable=SC1001,SC1012
# expr may be antiquated, but it is the only solution in some cases.
# shellcheck disable=SC2003
# $() are not fully portable (POSIX != portable).
# shellcheck disable=SC2006
# Exit immediately if a simple command exits with a non-zero status.
set -e
# Treat unset variables as an error when performing parameter expansion.
set -u
# Set shwordsplit for zsh.
\[ -n "${ZSH_VERSION:-}" ] && setopt shwordsplit
[ -n "${ZSH_VERSION:-}" ] && setopt shwordsplit
#
# Constants.
@ -33,11 +35,11 @@ TH_SHUNIT=${SHUNIT_INC:-./shunit2}; export TH_SHUNIT
# non-empty value to enable debug output, or TRACE to enable trace
# output.
TRACE=${TRACE:+'th_trace '}
\[ -n "${TRACE}" ] && DEBUG=1
\[ -z "${TRACE}" ] && TRACE=':'
[ -n "${TRACE}" ] && DEBUG=1
[ -z "${TRACE}" ] && TRACE=':'
DEBUG=${DEBUG:+'th_debug '}
\[ -z "${DEBUG}" ] && DEBUG=':'
[ -z "${DEBUG}" ] && DEBUG=':'
#
# Variables.
@ -50,12 +52,12 @@ th_RANDOM=0
#
# Logging functions.
th_trace() { echo "${MY_NAME}:TRACE $*" >&2; }
th_debug() { echo "${MY_NAME}:DEBUG $*" >&2; }
th_info() { echo "${MY_NAME}:INFO $*" >&2; }
th_warn() { echo "${MY_NAME}:WARN $*" >&2; }
th_error() { echo "${MY_NAME}:ERROR $*" >&2; }
th_fatal() { echo "${MY_NAME}:FATAL $*" >&2; }
th_trace() { echo "test:TRACE $*" >&2; }
th_debug() { echo "test:DEBUG $*" >&2; }
th_info() { echo "test:INFO $*" >&2; }
th_warn() { echo "test:WARN $*" >&2; }
th_error() { echo "test:ERROR $*" >&2; }
th_fatal() { echo "test:FATAL $*" >&2; }
# Output subtest name.
th_subtest() { echo " $*" >&2; }
@ -73,20 +75,20 @@ th_oneTimeSetUp() {
th_generateRandom() {
tfgr_random=${th_RANDOM}
while \[ "${tfgr_random}" = "${th_RANDOM}" ]; do
while [ "${tfgr_random}" = "${th_RANDOM}" ]; do
# shellcheck disable=SC2039
if \[ -n "${RANDOM:-}" ]; then
if [ -n "${RANDOM:-}" ]; then
# $RANDOM works
# shellcheck disable=SC2039
tfgr_random=${RANDOM}${RANDOM}${RANDOM}$$
elif \[ -r '/dev/urandom' ]; then
elif [ -r '/dev/urandom' ]; then
tfgr_random=`od -vAn -N4 -tu4 </dev/urandom |sed 's/^[^0-9]*//'`
else
tfgr_date=`date '+%H%M%S'`
tfgr_random=`expr "${tfgr_date}" \* $$`
unset tfgr_date
fi
\[ "${tfgr_random}" = "${th_RANDOM}" ] && sleep 1
[ "${tfgr_random}" = "${th_RANDOM}" ] && sleep 1
done
th_RANDOM=${tfgr_random}
@ -127,12 +129,13 @@ th_assertTrueWithNoOutput() {
th_stdout_=$3
th_stderr_=$4
assertTrue "${th_test_}; expected return value of zero" "${th_rtrn_}"
\[ "${th_rtrn_}" -ne "${SHUNIT_TRUE}" ] && \cat "${th_stderr_}"
assertFalse "${th_test_}; expected no output to STDOUT" \
"[ -s '${th_stdout_}' ]"
assertFalse "${th_test_}; expected no output to STDERR" \
"[ -s '${th_stderr_}' ]"
assertEquals "${th_test_}: expected return value of true" "${SHUNIT_TRUE}" "${th_rtrn_}"
assertFalse "${th_test_}: expected no output to STDOUT" "[ -s '${th_stdout_}' ]"
assertFalse "${th_test_}: expected no output to STDERR" "[ -s '${th_stderr_}' ]"
# shellcheck disable=SC2166
if [ -s "${th_stdout_}" -o -s "${th_stderr_}" ]; then
_th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}"
fi
unset th_test_ th_rtrn_ th_stdout_ th_stderr_
}
@ -152,13 +155,13 @@ th_assertFalseWithOutput()
th_stdout_=$3
th_stderr_=$4
assertFalse "${th_test_}; expected non-zero return value" "${th_rtrn_}"
assertTrue "${th_test_}; expected output to STDOUT" \
"[ -s '${th_stdout_}' ]"
assertFalse "${th_test_}; expected no output to STDERR" \
"[ -s '${th_stderr_}' ]"
\[ -s "${th_stdout_}" -a ! -s "${th_stderr_}" ] || \
_th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}"
assertNotEquals "${th_test_}: expected non-true return value" "${SHUNIT_TRUE}" "${th_rtrn_}"
assertTrue "${th_test_}: expected output to STDOUT" "[ -s '${th_stdout_}' ]"
assertFalse "${th_test_}: expected no output to STDERR" "[ -s '${th_stderr_}' ]"
# shellcheck disable=SC2166
if ! [ -s "${th_stdout_}" -a ! -s "${th_stderr_}" ]; then
_th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}"
fi
unset th_test_ th_rtrn_ th_stdout_ th_stderr_
}
@ -177,13 +180,13 @@ th_assertFalseWithError() {
th_stdout_=$3
th_stderr_=$4
assertFalse "${th_test_}; expected non-zero return value" "${th_rtrn_}"
assertFalse "${th_test_}; expected no output to STDOUT" \
"[ -s '${th_stdout_}' ]"
assertTrue "${th_test_}; expected output to STDERR" \
"[ -s '${th_stderr_}' ]"
\[ ! -s "${th_stdout_}" -a -s "${th_stderr_}" ] || \
_th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}"
assertFalse "${th_test_}: expected non-zero return value" "${th_rtrn_}"
assertFalse "${th_test_}: expected no output to STDOUT" "[ -s '${th_stdout_}' ]"
assertTrue "${th_test_}: expected output to STDERR" "[ -s '${th_stderr_}' ]"
# shellcheck disable=SC2166
if ! [ ! -s "${th_stdout_}" -a -s "${th_stderr_}" ]; then
_th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}"
fi
unset th_test_ th_rtrn_ th_stdout_ th_stderr_
}
@ -193,8 +196,8 @@ th_assertFalseWithError() {
# they are either written to disk, or recognized as an error the file is empty.
th_clearReturn() { cp /dev/null "${returnF}"; }
th_queryReturn() {
if \[ -s "${returnF}" ]; then
th_return=`\cat "${returnF}"`
if [ -s "${returnF}" ]; then
th_return=`cat "${returnF}"`
else
th_return=${SHUNIT_ERROR}
fi
@ -204,22 +207,26 @@ th_queryReturn() {
# Providing external and internal calls to the showOutput helper function.
th_showOutput() { _th_showOutput "$@"; }
_th_showOutput() {
_th_return_=$1
_th_stdout_=$2
_th_stderr_=$3
if isSkipping; then
return
fi
_th_return_="${1:-${returnF}}"
_th_stdout_="${2:-${stdoutF}}"
_th_stderr_="${3:-${stderrF}}"
isSkipping
if \[ $? -eq "${SHUNIT_FALSE}" -a "${_th_return_}" != "${SHUNIT_TRUE}" ]; then
if \[ -n "${_th_stdout_}" -a -s "${_th_stdout_}" ]; then
if [ "${_th_return_}" != "${SHUNIT_TRUE}" ]; then
# shellcheck disable=SC2166
if [ -n "${_th_stdout_}" -a -s "${_th_stdout_}" ]; then
echo '>>> STDOUT' >&2
\cat "${_th_stdout_}" >&2
cat "${_th_stdout_}" >&2
echo '<<< STDOUT' >&2
fi
if \[ -n "${_th_stderr_}" -a -s "${_th_stderr_}" ]; then
# shellcheck disable=SC2166
if [ -n "${_th_stderr_}" -a -s "${_th_stderr_}" ]; then
echo '>>> STDERR' >&2
\cat "${_th_stderr_}" >&2
fi
if \[ -n "${_th_stdout_}" -o -n "${_th_stderr_}" ]; then
echo '<<< end output' >&2
cat "${_th_stderr_}" >&2
echo '<<< STDERR' >&2
fi
fi

@ -3,7 +3,7 @@
#
# Unit test suite runner.
#
# Copyright 2008-2017 Kate Ward. All Rights Reserved.
# Copyright 2008-2020 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
#
# Author: kate.ward@forestent.com (Kate Ward)
@ -12,6 +12,20 @@
# This script runs all the unit tests that can be found, and generates a nice
# report of the tests.
#
### Sample usage:
#
# Run all tests for all shells.
# $ ./test_runner
#
# Run all tests for single shell.
# $ ./test_runner -s /bin/bash
#
# Run single test for all shells.
# $ ./test_runner -t shunit_asserts_test.sh
#
# Run single test for single shell.
# $ ./test_runner -s /bin/bash -t shunit_asserts_test.sh
#
### ShellCheck (http://www.shellcheck.net/)
# Disable source following.
# shellcheck disable=SC1090,SC1091
@ -25,8 +39,10 @@
RUNNER_LOADED=0
RUNNER_ARGV0=`basename "$0"`
RUNNER_SHELLS='/bin/sh ash /bin/bash /bin/dash /bin/ksh /bin/pdksh /bin/zsh'
RUNNER_SHELLS='/bin/sh ash /bin/bash /bin/dash /bin/ksh /bin/mksh /bin/zsh'
RUNNER_TEST_SUFFIX='_test.sh'
true; RUNNER_TRUE=$?
false; RUNNER_FALSE=$?
runner_warn() { echo "runner:WARN $*" >&2; }
runner_error() { echo "runner:ERROR $*" >&2; }
@ -36,7 +52,7 @@ runner_usage() {
echo "usage: ${RUNNER_ARGV0} [-e key=val ...] [-s shell(s)] [-t test(s)]"
}
_runner_tests() { echo ./*${RUNNER_TEST_SUFFIX} |sed 's#./##g'; }
_runner_tests() { echo ./*${RUNNER_TEST_SUFFIX} |sed 's#\./##g'; }
_runner_testName() {
# shellcheck disable=SC1117
_runner_testName_=`expr "${1:-}" : "\(.*\)${RUNNER_TEST_SUFFIX}"`
@ -114,6 +130,7 @@ for key in ${env}; do
done
# Run tests.
runner_passing_=${RUNNER_TRUE}
for shell in ${shells}; do
echo
@ -127,20 +144,20 @@ EOF
# Check for existence of shell.
shell_bin=${shell}
shell_name=''
shell_present=${FALSE}
shell_present=${RUNNER_FALSE}
case ${shell} in
ash)
shell_bin=`which busybox |grep -v '^no busybox'`
[ $? -eq "${TRUE}" -a -n "${shell_bin}" ] && shell_present="${TRUE}"
shell_bin="${shell_bin} ash"
shell_bin=`command -v busybox`
[ $? -eq "${RUNNER_TRUE}" ] && shell_present="${RUNNER_TRUE}"
shell_bin="${shell_bin:+${shell_bin} }ash"
shell_name=${shell}
;;
*)
[ -x "${shell_bin}" ] && shell_present="${TRUE}"
[ -x "${shell_bin}" ] && shell_present="${RUNNER_TRUE}"
shell_name=`basename "${shell}"`
;;
esac
if [ "${shell_present}" -eq "${FALSE}" ]; then
if [ "${shell_present}" -eq "${RUNNER_FALSE}" ]; then
runner_warn "unable to run tests with the ${shell_name} shell"
continue
fi
@ -157,9 +174,18 @@ EOF
# ${shell_bin} needs word splitting.
# shellcheck disable=SC2086
( exec ${shell_bin} "./${t}" 2>&1; )
shell_passing=$?
if [ "${shell_passing}" -ne "${RUNNER_TRUE}" ]; then
runner_warn "${shell_bin} not passing"
fi
test "${runner_passing_}" -eq ${RUNNER_TRUE} -a ${shell_passing} -eq ${RUNNER_TRUE}
runner_passing_=$?
done
done
return ${runner_passing_}
}
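# Note on the pass/fail accumulation above (descriptive, not from the source
# comments): the `test ... -a ...` line ANDs the running result with the
# current shell's result and captures the exit status, so runner_passing_
# remains ${RUNNER_TRUE} (0) only while every shell's test run passes; main()
# returns it, so standalone runs exit non-zero on any failure.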
# Execute main() if this is run in standalone mode (i.e. not from a unit test).
[ -z "${SHUNIT_VERSION}" ] && main "$@"
if [ -z "${SHUNIT_VERSION}" ]; then
main "$@"
fi
