The Translation of Scripture

I have been crucified with Christ. It is no longer I who live, but it is Christ who lives in me. And the life I now live in the flesh I live by faith in the Son of God, who loved me and gave himself for me. – Galatians 2:20

The context of particular, individual Bible verses can sometimes be difficult to understand. Verses can be taken out of context to support the point a person is making, or to comfort a person who is in despair. Is this, I ask you, the way that the Bible should be read and understood? Let me say that I am not claiming there is a right or wrong answer to that question, only that it is one worth asking. Let me take a couple of different perspectives on this and see the benefits.

Memorization

Baby steps! That is what we are told when we learn something new. When solving a problem we are told to break it down into smaller problems. These are valid approaches, and I would agree that there is a great deal of truth in each of them. When it comes to memorizing the Bible, we are told that memorizing Bible verses will help us to have those particular verses at hand on a day-to-day basis, or when witnessing to another person. I recently started memorizing verses again, such as the one above. Unlike previous attempts at memorizing Bible verses, this experience has taken me down a different path. Starting to memorize this verse led me to break down its context and to want to know that context in full detail. Doing so has helped me gain more of an understanding of how to use the verse, and grasping the context has made memorization easier. Context, I believe, has been the key to this type of memorization.

Context

I began to touch on context above when talking about learning a particular memory verse. In the past I would find it difficult to memorize a verse simply because I did not understand the context or the meaning of the verse. Sure, it related to me and was an inspiration in certain areas of my life. But what did it mean at the time it was written? How would that meaning relate to me today? That is where the context matters. Knowing the context that led to that verse being included in the Bible for all of these years carries meaning for everyone who reads and/or shares the Bible.

When I think of how to share my life with others, I always end up trying to understand what it is that I would share. Would sharing a memory verse from the Bible make a difference during that time of witnessing? I have concluded that it would not, because I cannot explain the meaning of the verse I am sharing with someone if I myself do not know the meaning. Memorizing a Bible verse has a great deal of significance in building faith, and can lead to knowing the Bible piece by piece. Yet, if you take away sermons, other books, and conversations with fellow Christians, what would those pieces be worth in understanding the context of the Bible? What relevance does the context of the book that contains the chapter with that verse have today?

These are questions I have had since starting to memorize verses from the Bible again. I have concluded that without knowing the context that surrounds a verse, I cannot fully grasp what it is that I am trying to memorize. The verse above is a good example. If I were to memorize it without knowing the context that leads up to it, I would not fully understand the verse and the meaning it has for my life and the lives of those whom God will bring into my life.

With that said, let me give you the context of this verse.

Galatians 2:20 – The Context

Paul – formerly known as Saul – was writing to those in Galatia. Saul had been the leader of those who gave their daily lives to persecuting Christians. God transformed Saul, and that change led to Saul becoming a Christian with a powerful testimony to share with others. When he became a Christian and accepted Christ as the Son of God, he became known as Paul, and he was counted among the apostles of Jesus – though unlike the original apostles, he had not walked with Jesus during His earthly ministry; he encountered the risen Christ only after Jesus had been crucified. Another of the apostles of Jesus was Peter. The two of them went their separate ways after Jesus was crucified.

In the beginning of Galatians, the opening introduces Paul being led to Galatia. The first chapter is devoted to Paul telling the Galatians that they have been led astray from the gospel and have found another gospel to follow. I found myself wondering what it was that could have led them astray. Then I realized I can look all around me today and see the same thing happening.

The second chapter begins the context that surrounds this verse. Paul confronts Peter about how Peter is leading his life and living it out. These men were eager to share everything about Jesus and about Him being their King while He was here on earth. Now that He was no longer in the flesh, was there any reason to lead a different life? This is what Paul was confronting Peter about. Peter was distancing himself from the Gentiles – those who did not know the laws of the religion. He would do this when he was in the presence of others who were not Gentiles. He did not want to be seen as one who associated with them, nor as one who was trying to share the gospel of Jesus with them. Paul did not understand the reasoning behind this. So when he confronted Peter, he shared with him what he himself had gone through, and was still going through, to continue to share the gospel of Jesus Christ. Each and every day when Paul went somewhere, anyone and everyone would at first be afraid of him because of his past. It was not until he was in their presence that they realized he had indeed changed. This is where I can understand the beginning of this verse, where he says that he has been crucified with Christ. Most anywhere that Christ went, there were always those who were wary of Him and of what He was proclaiming and teaching.

Adding onto the paragraph above, I can see where Paul is sharing with Peter that it is no longer he who lives in the body, but Christ instead. If each and every place that Paul went was at first afraid of him, and then felt a peace about him after meeting him, this must have been a feeling of peace for Paul as well. This peace could not have come from how he previously felt about Christian believers. No, it was the peace that Christ said would be there when He left and was raised to Heaven. He – Jesus – promised to leave the Holy Spirit with anyone who believed in Him. This led Paul to believe that it was Christ who was sharing this peace with those who were afraid of Paul when he would enter a town.

Reading chapter 2 gave me the context of what led to this verse. That meaning made it clearer how this verse relates to my life. Without this context, I do not believe I would fully grasp the reason for this verse that I want to memorize. Having the context gives me more confidence that if there is a time when God uses me to share the Gospel with another person, sharing this verse at the right time will be clearer because of the context behind it.

Summary

Memorizing the Bible is without a doubt the way to understanding our relationship with God, and the love that God has for each of us. Memorization of verses can be a small step in knowing about this love and how to love God in return. Understanding the context behind a verse is what can put all of the pieces together, giving it a solid foothold that will last forever in your heart, your mind, and possibly with another person you share it with. This is what I have come to realize in my latest journey in memorizing the Bible. God bless you!


The Story of the Shortstop in Baseball

Each baseball team is made up of a selection of players who have positions they have been groomed to play. At any given time, nine of the players are on the field using their abilities to help the team win. On the bench the team has other players – utility players – who perform small, specific tasks when the time comes in a game for those specialty players to be called upon. The players on the field are well skilled in the positions each one of them is playing. Each knows his role and knows the importance of focusing solely on that role. The shortstop knows what is required to make the plays at that position. He does not have any worries about having to know how to be the center fielder, the catcher, or the pitcher. His duty is to be the shortstop, and that is it.

The shortstop. One of the most important positions in baseball. Speed, a strong arm, wisdom in understanding game situations as they arise, and a good bat at the plate. These abilities are what make a player a shortstop. Some of these characteristics describe other players on the field, but this one position is representative of all of the abilities listed above. Greg is the current shortstop. He was classified as a shortstop a few years ago when he showed that he had these abilities. When he showed that he was strong in each of these, it was decided that he was going to be a shortstop. So from that point on, Greg has been classified as a shortstop wherever he has gone, and whenever a scout would come watch him play. One of the scouts determined that Greg would be the shortstop his team needed. They drafted him to play, and now they had the new shortstop they had been looking for.

Leading up to Greg being drafted, and for years before, Greg had decided in his own mind that he did in fact want to be the shortstop. He felt like he had the talent to be a shortstop, and nobody could argue with that when his abilities were shown on the field. Each and every day, before games and after games, Greg would perform specific drills that would make him a better player at his position. These drills did not focus on any other position but shortstop. The drills he took part in included taking ground balls so that he was consistent in stopping a ground ball, catching a line drive, or fielding anything else that was hit his way. He took batting practice. During this time he would see fastballs, curveballs, and change-ups, and would work on hitting to left field, right field, and center field, bunting, and so on. He would also go to the weight room and do specific exercises that were going to make him stronger as a shortstop. All of these methods were used so that Greg would be classified as one of the best shortstops when he was on the field in a game.

When the season was over, it was time for the draft. We know that he was drafted from what I wrote earlier in this story. It might be hard to have followed along, but the drama has already unfolded. What has not unfolded are the arguments for what made him a solid draft pick. When Dan, the scout, went to management to give his arguments for what makes Greg the shortstop they were looking for, the arguments were strong enough for management to agree with Dan that Greg was the shortstop they needed. Dan argued that he had the speed, the arm, the ability to hit for average, get on base, and hit the occasional home run that a shortstop will hit. He has strong leadership abilities to convey to the other position players on the field. These arguments were strong enough to showcase the abilities that Greg possesses. The team needed only one player to make their team good enough to vie for the playoffs. His role, with the single responsibility of being the shortstop they needed, was important not only to the team but to Greg being drafted as well.

The day finally came. The ESixers drafted Greg as their new shortstop. His services, once given to the Backbones, were now going to the next level in pro baseball. All of the years that Greg worked on those specific skills to be a shortstop finally paid off. He is now a pro shortstop.

For those of you who are programmers reading this story, it is written specifically for you. How? A programmer should be able to take this story and write a small program that makes Greg come to life. Reading the story above should give you enough to put together a small fiddle in a short period of time, using keywords found within the story to guide you along the way. Pay attention to how you make him come to life, because there are certain aspects you will need in order to instantiate him into action. Let's see Greg come to life.
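
Here is one possible sketch in JavaScript of what I have in mind. The class name, method names, and ability list are my own illustrative choices; only Greg, the Backbones, and the ESixers come from the story.

  // The story's keywords hint at the shape: Greg was "classified" as a shortstop (a class),
  // he needs to be "instantiated into action" (a constructor and new), and he has a single
  // responsibility (methods that only do shortstop work).
  class Shortstop {
    constructor(name, team) {
      this.name = name;
      this.team = team;
      this.abilities = ['speed', 'strong arm', 'game awareness', 'good bat'];
    }

    fieldGroundBall() {
      return `${this.name} fields the ground ball and throws to first.`;
    }

    takeBattingPractice() {
      return `${this.name} works on fastballs, curveballs, and change-ups.`;
    }

    draftedBy(newTeam) {
      this.team = newTeam;
      return `${this.name} is now the ${newTeam} shortstop.`;
    }
  }

  // Bring Greg to life: he starts with the Backbones and is drafted by the ESixers.
  const greg = new Shortstop('Greg', 'Backbones');
  console.log(greg.fieldGroundBall());
  console.log(greg.draftedBy('ESixers'));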

Annotations for Secrets of the Javascript Ninja – Part 3

It has been a real joy reading “Secrets of the JavaScript Ninja” by John Resig because of how much I have read and understood about JavaScript itself. A great deal of my time is spent looking at framework/library code, and not enough time looking at and understanding JavaScript itself. Nothing against the frameworks and libraries that I love to use, but there have been times when I have not understood some of the code I was using because I did not know the underlying JavaScript that it was based on. This part of my annotations was a serious read for me because of how important the subject is to the base of JavaScript – prototypes.

What are Prototypes

A prototype is simply an object to which the search for a particular property can be delegated. Prototypes are a convenient means of defining properties and functionality that will be automatically accessible to other objects. One form of code reuse that also helps us conceptually deal with our programs is inheritance, extending the features of one object into another. In JavaScript, inheritance is implemented with a simple mechanism called prototyping.
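
To make that concrete, here is a minimal sketch of delegation; the object and property names are just illustrative.

  // 'player' has no play method of its own, so the search for the property
  // is delegated to its prototype, canPlay.
  const canPlay = {
    play() {
      return 'playing';
    }
  };

  const player = Object.create(canPlay);
  console.log(player.play());                             // "playing"
  console.log(Object.getPrototypeOf(player) === canPlay); // true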

Instance Properties

When a function is called as a constructor via the new operator, its context is defined as the new object instance. This means that in addition to exposing properties via the prototype, we can initialize values within the constructor via the this parameter.
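
A small sketch of that difference, with names of my own choosing in the spirit of the book's samples:

  function Ninja() {
    this.swung = true;                 // instance property, created anew for every object
  }

  Ninja.prototype.swingSword = function () {  // shared functionality, exposed via the prototype
    return this.swung;
  };

  const ninja = new Ninja();           // new sets the function context to the fresh object
  console.log(ninja.swingSword());     // true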

Constructors

Although the constructor property of an object can be changed, doing so doesn’t have any immediate or obvious constructive purpose (though one might think of some malicious one), as its reason for being is to inform us from where the object was constructed. If the constructor property is overwritten, the original value is simply lost.

Inheritance

Inheritance is a form of reuse in which new objects have access to properties of existing objects. This helps us avoid the need to repeat code and data across our code base.
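
Here is a hedged sketch of how that access is typically set up with prototypes; the Person and Ninja names are illustrative.

  function Person() {}
  Person.prototype.dance = function () {
    return 'dancing';
  };

  function Ninja() {}
  // Put a Person-prototype-backed object into Ninja's prototype chain.
  Ninja.prototype = Object.create(Person.prototype);
  Ninja.prototype.constructor = Ninja;   // repair the constructor reference we just lost

  const ninja = new Ninja();
  console.log(ninja.dance());            // "dancing", found up the chain on Person.prototype
  console.log(ninja instanceof Person);  // true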

Overriding the Constructor Property

In JavaScript, every object property is described with a property descriptor through which we can configure the following keys (a short sketch follows this list):

  1. configurable – if set to true, the property descriptor of the property can be changed and the property can be deleted; if set to false, we can do neither of these things.
  2. enumerable – if set to true, the property shows up during a for … in loop over the object’s properties.
  3. value – specifies a value of the property. Defaults to undefined.
  4. writable – if set to true, the property value can be changed by using an assignment.
  5. get – defines the getter function, which will be called when we access the property. Cannot be defined in conjunction with value and writable
  6. set – defines the setter function, which will be called whenever an assignment is made to the property. Also cannot be defined in conjunction with value and writable.
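
A minimal sketch of configuring a descriptor with Object.defineProperty; the object and property names are illustrative.

  const ninja = {};

  Object.defineProperty(ninja, 'sneaky', {
    configurable: false,  // the descriptor cannot be changed and the property cannot be deleted
    enumerable: false,    // hidden from for...in loops
    value: true,          // the property's value
    writable: true        // the value can still be changed by assignment
  });

  console.log(ninja.sneaky);   // true
  for (const prop in ninja) {
    console.log(prop);         // never logs 'sneaky', because it is not enumerable
  }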

Prototype Summary

  1. JavaScript objects are simple collections of named properties with values
  2. JavaScript uses prototypes. Every object can have a reference to a prototype, an object to which the search for a particular property will be delegated to, if the object itself doesn’t have the searched-for property. An object’s prototype can have its own prototype, and so on, forming a prototype chain. Every function has a prototype property that is set as the prototype of objects that it instantiates.
  3. A function’s prototype object has a constructor property pointing back to the function itself. This property is accessible to all objects instantiated with that function
  4. In JavaScript, properties have attributes (configurable, enumerable, writable). The properties can be defined by using the Object.defineProperty built in method.
  5. JavaScript ES2015 adds support for a class keyword that enables us to more easily mimic classes in JavaScript. Behind the scenes, prototypes are still in play!
  6. The extends keyword enables elegant inheritance (both are sketched briefly after this list)
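
A brief sketch of items 5 and 6; the class names are illustrative, and the prototype chain is still doing the work underneath.

  class Person {
    constructor(name) {
      this.name = name;
    }
    dance() {
      return `${this.name} is dancing`;
    }
  }

  class Ninja extends Person {          // extends wires up the prototype chain for us
    wieldSword() {
      return `${this.name} wields a sword`;
    }
  }

  const ninja = new Ninja('Yoshi');
  console.log(ninja.dance());                   // inherited from Person.prototype
  console.log(ninja.wieldSword());
  console.log(typeof Person.prototype.dance);   // "function", prototypes are still in play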

Monitoring Objects Summary

  1. We can monitor objects with getters, setters, and proxies
  2. By using accessor methods (getters and setters) we can control access to object properties, either with the built-in Object.defineProperty method or with the special get and set syntax as part of object literals or ES2015 classes (a short sketch follows this list).
  3. A get method is implicitly called whenever we try to read the matching object’s property, while a set method is called whenever we assign a value to it.
  4. In addition, getter methods can be used to define computed properties, properties whose value is calculated on a per request basis, while the setter methods can be used to achieve data validation and logging.
  5. Proxies are an ES2015 addition to JavaScript and are used to control other objects. Proxies enable us to define custom actions that will be executed when an object is interacted with. All interactions have to go through the proxy which has a number of traps that are triggered when a specific action occurs.
  6. Use proxies to achieve elegant logging, performance measurements, data-validation,  auto-populating exceptions, etc.
  7. Proxies are not very performant, so be careful when using them in code that gets executed a lot. We recommend that you do performance testing.
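
A small sketch of a getter/setter pair and a logging proxy; the names and the validation rule are illustrative.

  const ninja = {
    _level: 0,
    get level() {                   // called whenever ninja.level is read
      return this._level;
    },
    set level(value) {              // called whenever ninja.level is assigned
      if (value < 0) throw new Error('level cannot be negative');  // simple data validation
      this._level = value;
    }
  };
  ninja.level = 3;
  console.log(ninja.level);         // 3

  // A proxy whose get trap logs every property read on the underlying object.
  const loggedNinja = new Proxy(ninja, {
    get(target, key) {
      console.log(`reading ${String(key)}`);
      return target[key];
    }
  });
  console.log(loggedNinja.level);   // logs "reading level", then 3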

Arrays

Arrays are one of the most common data-types around and they allow us to handle collections of items. However, as with a lot of things in JavaScript, arrays also come with a little twist in that they are actually just objects.

Common Operations on Array

  1. In order to make our lives a bit easier, all JavaScript arrays have a built-in forEach method.
  2. The built-in map method constructs a completely new array, then iterates over the input array, and for each item in the input array places exactly one item in the newly constructed array, based on the result of the callback that we’ve provided to the map method.
  3. The every method takes a callback that will check, for each value in our collection, whether we know the value name. The every method returns true only if the passed-in callback returns true for every item in the array.
  4. Starting from the first array item, the some method calls the callback on each array item until an item is found, for which the callback returns a true value.
  5. To find an array item that satisfies a certain condition, we only have to use the built-in find method to which we pass a callback that will be invoked for each item in the collection until the targeted item is found.
  6. In the case where we need to find multiple items satisfying a certain condition, we can use the filter method that creates a new array with all the items that satisfy that criterion.
  7. The JavaScript engine implements a sorting algorithm, and the only thing that we have to provide is a callback which will inform the sorting algorithm about the relationship between two array items. If, for example, we have item a and item b, the callback returns a value:
    1. less than 0, then item a should come before item b
    2. equal to 0, then items a and b are on equal footing
    3. greater than 0, then item a should come after item b
  8. The reduce method works by taking the initial value (such as 0), and then calling the callback function on each array item with the result of the previous callback invocation (or the initial value) and the current array item as arguments. Finally, the result of the reduce method invocation is the result of the last callback, called on the last array item. A short sketch of these methods follows this list.
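
Here is a short sketch of those methods on one small, made-up data set.

  const players = [
    { name: 'Greg', hits: 3 },
    { name: 'Yoshi', hits: 0 },
    { name: 'Kuma', hits: 5 }
  ];

  players.forEach(p => console.log(p.name));                    // visits every item
  const names   = players.map(p => p.name);                     // ["Greg", "Yoshi", "Kuma"]
  const allHit  = players.every(p => p.hits > 0);               // false
  const someHit = players.some(p => p.hits > 0);                // true
  const first   = players.find(p => p.hits > 2);                // { name: "Greg", hits: 3 }
  const hitters = players.filter(p => p.hits > 0);              // Greg and Kuma
  const byHits  = [...players].sort((a, b) => a.hits - b.hits); // ascending by hits
  const total   = players.reduce((sum, p) => sum + p.hits, 0);  // 8

  console.log(names, allHit, someHit, first, hitters, byHits, total);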

Dealing with Collections

  1. Since maps are collections, there’s nothing stopping us from iterating over maps with for … of loops
  2. You’re also guaranteed that these values will be visited in the order in which they were inserted (something on which you cannot rely when iterating over objects by using the for … in loop)
  3. Sets are collections of unique items, and their primary purpose is to stop us from storing multiple occurrences of the same object (see the sketch after this list).
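
A tiny sketch of both collections; the keys and values are illustrative.

  const positions = new Map();
  positions.set('shortstop', 'Greg');
  positions.set('catcher', 'Yoshi');

  for (const [position, player] of positions) {  // visited in insertion order
    console.log(position, player);
  }

  const numbers = new Set([1, 2, 2, 3, 3, 3]);
  console.log([...numbers]);                     // [1, 2, 3], duplicates are dropped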

Array Summary

All arrays have access to a number of useful methods:

  1. The map method creates a new array, with the results of calling a callback on every element.
  2. every and some methods determine whether all or some array items satisfy a certain criterion.
  3. find and filter methods find array items that satisfy a certain condition
  4. sort method sorts an array
  5. reduce method aggregates all items in an array into a single value

Code Modularization Summary

  1. Large, monolithic code bases are far more likely to be difficult to understand and maintain than smaller, well-organized ones. One way of improving the structure and organization of our programs is to break them down into smaller, relatively loosely coupled segments or modules.
  2. Modules are larger units of organizing our code than objects and functions, and they allow us to divide our programs into clusters that somehow belong together.
  3. Modules foster reusability of code.
  4. Immediate functions are used because they create a new scope in which we can define module variables that are not visible from outside that scope.
  5. Closures are used because they enable us to keep module variables alive.
  6. The most popular pattern is the module pattern, which usually combines an immediate function with a return of a new object that represents the module’s public interface (sketched below).
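
A minimal sketch of that pattern; the module name and its methods are illustrative.

  // An immediately invoked function creates a private scope; the returned object
  // is the module's public interface, and the closure keeps `count` alive.
  const counterModule = (function () {
    let count = 0;                       // private module variable

    return {
      increment() {
        return ++count;
      },
      current() {
        return count;
      }
    };
  })();

  counterModule.increment();
  console.log(counterModule.current());  // 1
  console.log(counterModule.count);      // undefined, the variable itself is not exposed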

Annotations Summary

So, this 3-part series is a combination of the notes that I thought were important when reading the book. There may be parts of the book that I overlooked, but I am confident that this amount of information alone is enough to move forward with. My next steps are to go back through the diagrams used in the book, and there are lots of them, and take a more detailed look at the code.

After I do that for a couple of days, I am going to jump into one of the ES6/ES2015 books that I have been wanting to dive into.

Annotations for Secrets of the Javascript Ninja – Part 2

This is the second part of my series of annotations that I took while reading “Secrets of the JavaScript Ninja”.

Understanding JavaScript Variables

Unlike var, which defines the variable in the closest function or global lexical environment, the semantics of the let and const keywords are a bit simpler: they define variables in the closest lexical environment (which can be a block environment, a loop environment, a function environment, or even the global environment). This effectively means that we can use let and const to define block-scoped, function-scoped, and global-scoped variables.
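
A quick sketch of the difference; the variable names are illustrative.

  function scopes() {
    if (true) {
      var functionScoped = 'visible in the whole function';
      let blockScoped = 'visible only inside this block';
    }
    console.log(functionScoped);   // "visible in the whole function"
    // console.log(blockScoped);   // ReferenceError: blockScoped is not defined
  }
  scopes();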

Registering Identifiers within Lexical Environments

The first phase is activated whenever a new lexical environment is created. In this phase, the code is not executed, but the JavaScript engine visits and registers all declared variables and functions within the current lexical environment.

The second phase, JavaScript execution, starts after this has been accomplished; the exact behavior depending on the type of variable (let, var, const, function declaration) and the type of the current environment (global, function, or block). The process goes as follows:

  1. If we are creating a function environment, the implicit arguments identifier is created, along with all formal function parameters and their argument values. If we are dealing with a non-function environment, this step is skipped.
  2. If we are creating a global or function environment, the current code is scanned (without going into the body of other functions) for function declarations (but not function expressions or lambdas). For each discovered function declaration, a new function is created and bound to an identifier in the current environment with the function’s name. If that identifier name already exists, its value is overwritten. If we are dealing with block environments, this step is skipped.
  3. The current code is scanned for variable declarations. In the case of function and global environments, all variables declared with the keyword var defined outside other functions (but they can be placed within blocks), and all variables declared with the keywords let and const declared outside other functions and blocks, are found. In the case of block environments, the code is scanned only for variables declared with the keywords let and const directly in the current block. For each discovered variable, if the identifier doesn’t exist in the current environment, the identifier is registered and its value initialized to undefined. But if the identifier exists, it is left with its current value. (A short example of what this means in practice follows this list.)
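
A small example of what this registration order means when the code runs; the names are illustrative.

  console.log(declared());     // "ready": the function declaration was registered before execution
  console.log(hoistedVar);     // undefined: registered in the first phase, assigned in the second
  // console.log(blocked);     // ReferenceError: let variables cannot be read before their declaration

  function declared() {
    return 'ready';
  }
  var hoistedVar = 'assigned during execution';
  let blocked = 'assigned during execution';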

Closures and Scopes Summary

  1. Closures allow a function to access all variables that are in scope when the function itself was defined
  2. JavaScript engines track function execution through an execution context stack (or a call stack). Every time a function is called, a new function execution context is created and placed on the stack. When a function is done executing, the matching execution context is simply popped from the stack.
  3. Closures are merely a side effect of JavaScript scoping rules and the fact that a function can be called even when the scope in which it was created is long gone (a small sketch follows this list).
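
A small sketch of a closure outliving the scope that created it; the names are illustrative.

  function makeKeeper() {
    const secret = 'hidden value';
    return function reveal() {
      return secret;             // still reachable through the closure
    };
  }

  const reveal = makeKeeper();   // makeKeeper's execution context has been popped off the stack
  console.log(reveal());         // "hidden value"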

Generator Functions

  1. A generator is a function that generates a sequence of values, though not all at once, as a standard function would, but on a per-request basis.
  2. The truth is that generators are quite unlike standard functions, starting with the fact that calling a generator doesn’t really execute the generator function; instead, it creates an object called an iterator.
  3. By using the yield* operator on an iterator, we basically yield to another generator.
  4. The easiest way to send data to a generator is by treating it like any other function and using function call arguments (see the sketch after this list).
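
A short sketch of a generator producing values per request and receiving data through next(); the names are illustrative.

  function* weaponGenerator(collection) {
    for (const weapon of collection) {
      const feedback = yield weapon;                // suspends here until the next request
      if (feedback) console.log(`received: ${feedback}`);
    }
  }

  const weapons = weaponGenerator(['katana', 'wakizashi']);
  console.log(weapons.next().value);                // "katana"
  console.log(weapons.next('nice one').value);      // logs "received: nice one", then "wakizashi"
  console.log(weapons.next().done);                 // true, the generator has completed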

Generators Under the Hood

  1. Once a generator produces (or yields) a value, it suspends its execution and waits for the next request. So in a way, a generator works almost like a small program, a state machine that moves between different states.
  2. Suspended start – when the generator is created, it starts in this state. None of the generator’s code is actually executed.
  3. Executing – the state in which the actual code of the generator is executed. The execution continues either from the beginning or from where the generator was last suspended. A generator moves to this state when the matching iterator’s next method is called, and there exists code to be executed.
  4. Suspended yield – during execution, when a generator reaches a yield expression, it creates a new object carrying the return value, yields it, and suspends its execution. This is the state in which the generator is paused and is waiting to continue its execution.
  5. Completed – if during execution, the generator either runs into a return statement or runs out of code to execute, the generator moves into the completed state.

Promises

  1. A promise is a placeholder for a value that we don’t yet have, but will have at some later point in time; it’s a guarantee that we’ll eventually know the result of some asynchronous computation.
  2. A problem with simple callbacks is that the code invoking the callback is usually not executed in the same step of the event loop as the code that starts the long-running task.
  3. Rejecting promises – This is a way of treating all problems that happen while working with promises in a uniform way. Regardless of how the promise was rejected, whether explicitly by calling the reject method or even implicitly, if an exception occurs, all errors and rejection reasons are simply directed to our rejected callback, making our lives as developers, a little bit easier.
  4. Multiple promises – the Promise.all method takes in an array of promises and creates a new promise that successfully resolves when all passed-in promises resolve, and rejects if even one of the promises fails. The success callback receives an array of resolved values, one for each of the passed-in promises, in order.
  5. Manually tracking – there is no need to manually track everything. You just use the Promise.race method, which takes an array of promises and returns a completely new promise that resolves or rejects as soon as the first of those promises resolves or rejects. (A short sketch of these methods follows this list.)
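
A short sketch of creating, chaining, and combining promises; the delay helper and its values are illustrative.

  const delay = (ms, value) =>
    new Promise(resolve => setTimeout(() => resolve(value), ms));

  delay(100, 'ninja')
    .then(value => console.log('resolved with', value))
    .catch(error => console.log('rejected with', error)); // catches explicit rejects and thrown exceptions alike

  Promise.all([delay(50, 'a'), delay(100, 'b')])
    .then(values => console.log(values));                 // ["a", "b"], in the original order

  Promise.race([delay(50, 'fast'), delay(500, 'slow')])
    .then(winner => console.log(winner));                 // "fast"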

Combining Generators and Promises

  1. Functions are first-class objects – we send a function as an argument to the async function
  2. Generator functions – we use their ability to suspend and resume execution
  3. Promises – they help us deal with asynchronous code
  4. Callbacks – we register success and failure callbacks on our promises
  5. Closures – the iterator, through which we control the generator, is created in the async function and we access it, through closures, in the promise callback
  6. Async functions – they are planned for an upcoming installment of JavaScript (async/await ultimately landed in ES2017). A sketch of this combined pattern follows this list.
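
Here is a hedged sketch of how those pieces fit together. The run helper and the fake fetch are my own names, not the book's exact code; the idea is that each yielded promise resumes the generator with its resolved value.

  function run(generator) {
    const iterator = generator();                  // closure: handleResolved can reach it later

    function handleResolved(value) {
      const result = iterator.next(value);         // resume the generator with the resolved value
      if (result.done) return;
      Promise.resolve(result.value)
        .then(handleResolved)
        .catch(error => iterator.throw(error));    // route failures back into the generator
    }

    handleResolved();
  }

  const fakeFetch = url => Promise.resolve(`data from ${url}`);

  run(function* () {
    const ninjas = yield fakeFetch('ninjas.json');     // reads like synchronous code
    const missions = yield fakeFetch('missions.json');
    console.log(ninjas, missions);
  });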

Generators and Promises Summary

  1. Generator functions are functions that generate sequences of values; not all at once, but on a per-request bases
  2. Unlike normal functions, generator functions can suspend and resume their execution. Once a generator has generated a value it suspends its execution, without blocking the main thread, and patiently waits for the next request
  3. A generator funciton is declared by putting a start * before the function keyword. With the body of the generator function, you can use the new yield keyword that yields a value and suspends the execution of the generator. In case we want to yield to another generator, use the yield* operator.
  4. Calling a generator function created an iterator object through which we control the execution of the generator. We request new values from the generator by using the iterator’s next method, and we can even throw execptions into the generator by calling the iterator’s throw method. In addition, the next method can be used to send in values to the generator.
  5. Promises are placholders for the results of asynchronous computations, it is a guarantee that eventually we’ll know the result of an asynchronous computation. A promise can either succeed or fail, and once it has done so, there will be no more changes.
  6. Promises significantly simplify our dealings with asynchronous tasks. We can easily deal with sequences of interdependent asynchonous steps by chaining promises by taking advantage of the then method. Parallel handling of multiple asynchronous steps is also greatly simplified, just use the Promise.all method.

That is a wrap for this part 2 of my annotations on the “Secrets of the JavaScript Ninja” book. Part 3 will go into object-orientation with prototypes. Part 4 will talk about arrays, and Part 5 will finish off with modules and handling DOM manipulation.

Annotations for Secrets of the Javascript Ninja – Part 1

I recently had a fun time reading “Secrets of the JavaScript Ninja”. While reading, I decided to use some new techniques I have been working on while learning to speed read. One of those techniques is making sure that you do not have information overload. This is done by understanding what it is that you are reading: skim the headings, read the first paragraphs, pay attention to tips, and read the summary. The rest of the content is informative but less essential. Using this technique I was able to get the book read over a 2-day span totaling 5 hours.

Here are the annotations I took while reading the book:

Understanding the browser

The majority of the code is executed in the context of a response to some event – network events, timers, mouse movements, clicks, etc.

Executing JavaScript Code

The browser provides an API through a global object that can be used by the JavaScript engine to interact with and modify the page.

The global code is executed automatically by the JavaScript engine in a straightforward fashion, line by line, as it is encountered.

The function code, in order to be executed, has to be called by something else: either by global code, by some other function, or by the browser.

Event Handling

The browser execution environment is, at its core, based on the idea that only a single piece of code can be executed at once: the so-called single-threaded execution model.

To understand single-threaded, think of a line at the bank. Everyone gets into a single line and has to wait their turn to be “processed” by the tellers. With JavaScript though, there is only one teller window open.

The browser uses an event queue. To understand the event queue:

  1.  The browser checks the head of the event queue
  2. If there are no events, the browser keeps checking
  3. If there is an event at the head of the event queue, the browser takes it and executes the associated handler – if there is one
  4. During this execution, the rest of the events are patiently waiting in the event queue, for their turn to be processed.

Registering Event Handlers

A pretty simple annotation here: you have to notify the browser that you are interested in an event. Plain and simple.
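
A minimal sketch of doing exactly that; the element id is illustrative, and this assumes such an element exists on the page.

  const button = document.getElementById('save-button');

  // Tell the browser we are interested in click events on this element.
  button.addEventListener('click', function (event) {
    console.log('clicked at', event.clientX, event.clientY);
  });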

Debugging Code

There are two important aspects of debugging JavaScript:

  1. Logging, which prints out what’s going on, as our code is running
  2. Breakpoints, which allow us to temporarily pause the execution of our code and explore the current state of the application. (A tiny sketch of both follows.)
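
A tiny sketch of both techniques; the function and values are illustrative.

  function assign(ninja, mission) {
    console.log('assigning', mission, 'to', ninja);   // logging traces values as the code runs
    debugger;                                         // pauses here when the developer tools are open
    return `${ninja}: ${mission}`;
  }
  assign('Yoshi', 'guard the gate');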

Creating Tests

Good tests exhibit three important characteristics:

  1. Repeatability – test results should be highly reproducible. Tests run repeatedly should always produce the exact same results. If test results are nondeterministic, how would we know which results are valid and which are invalid? Additionally, reproducibility ensures our tests aren’t dependent upon external factors such as network or CPU loads.
  2. Simplicity – our tests should focus on testing one thing. We should strive to remove as much HTML markup, CSS, or JavaScript as we can without disrupting the intent of the test case. The more we remove, the greater the likelihood that the test case will only be influenced by the specific code that we’re testing.
  3. Independence – our tests should execute in isolation. We must avoid making the results from one test dependent upon another. Breaking tests down into the smallest possible units will help us determine the exact source of a bug when an error occurs.

There are a number of approaches that can be used for constructing tests; the two primary approaches are:

  1. Deconstructive test cases – created when existing code is whittled down to isolate a problem, eliminating anything that’s not germane to the issue. This helps us to achieve the three characteristics listed previously. 
  2. Constructive test cases – we start from a known-good, reduced case and build up until we’re able to reproduce the bug in question. In order to use this style of testing, we’ll need a couple of simple test files from which to build up tests, and a way to generate these new tests with a clean copy of our code.

Definitions and Arguments

In JavaScript, functions are first-class objects; that is, they coexist with, and can be treated like, any other JavaScript object.

Defining Functions

In JavaScript, there are a few different ways of defining functions, which can be divided into four groups (all four are sketched after this list):

  1. Function declarations and function expressions
  2. Arrow functions
  3. Function constructors
  4. Generator functions – enable us to create functions which, unlike normal functions, can be exited and re-entered later in the application execution, while keeping the values of their variables across these re-entrances.
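
A quick sketch of one example from each group; the names are illustrative.

  function declared() { return 'function declaration'; }            // 1. function declaration
  const expressed = function () { return 'function expression'; };  // 1. function expression

  const doubled = x => x * 2;                                       // 2. arrow function

  const added = new Function('a', 'b', 'return a + b');             // 3. function constructor

  function* twoValues() {                                           // 4. generator function
    yield 'first';
    yield 'second';
  }

  console.log(declared(), expressed(), doubled(3), added(1, 2), [...twoValues()]);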

Understanding Function Invocation

Implicit function parameters this and arguments are silently passed to functions, and they can be accessed just like any other explicitly named function parameter within the function’s body.

The this parameter represents the function context, the object on which our function is invoked, while the arguments parameter represents arguments that are passed in through a function call.

The this parameter: Introducing the function context

The this parameter refers to an object that’s implicitly associated with the function invocation and is termed the function context. As it turns out, what the this parameter points to isn’t, as in Java, defined only by how and where the function is defined; it can also be heavily influenced by how the function is invoked.

Invoking Functions

There are four different ways to invoke a function, each with its own nuances (sketched after this list):

  1. As a function
  2. As a method
  3. As a constructor
  4. Using apply or call methods
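
A sketch of all four, showing the function context each produces; the names are illustrative.

  function whoAmI() {
    return this;
  }

  whoAmI();                      // 1. as a function: this is the global object (undefined in strict mode)

  const ninja = { whoAmI };
  ninja.whoAmI();                // 2. as a method: this is ninja

  new whoAmI();                  // 3. as a constructor: this is the newly created object

  const target = { name: 'target' };
  whoAmI.call(target);           // 4. with call, this is explicitly set to target
  whoAmI.apply(target);          //    apply works the same but takes arguments as an array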

Fixing Function Contexts – Using the bind method

In addition to the methods above, every function also has access to the bind method that, in short, creates a new function with the same function body, a function whose context is always bound to a certain object, regardless of the way we invoke it.
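
A minimal sketch of bind; the button object is illustrative.

  const button = {
    clicked: false,
    click() {
      this.clicked = true;
    }
  };

  const boundClick = button.click.bind(button);
  boundClick();                  // even called on its own, this is still bound to button
  console.log(button.clicked);   // true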

Closures

A closure is a mechanism that allows the function to access and manipulate variables that are external to that function. Closures allow a function to access all the variables, as well as other functions, that are in scope when the function itself is defined.

When the term scope is used, it refers to the visibility of the identifiers in certain parts of a program. A scope is a part of the program where a certain name is bound to a certain variable.

Another common area in which we can use closures is when dealing with callbacks – when a function is asynchronously called at an unspecified later time.
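
A small sketch of that situation: the timer callback runs much later, yet it still sees the variable from the scope where it was created. The names are illustrative.

  function delayedGreeting(name) {
    const message = `hello, ${name}`;
    setTimeout(function () {
      console.log(message);      // the closure keeps message alive until the callback fires
    }, 1000);
  }
  delayedGreeting('Yoshi');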

Tracking Code Execution with Execution Contexts

A stack is one of the fundamental data structures: you can put new items only on the top, and when you want to take existing items, you also have to take them from the top. Think of a stack of trays in a cafeteria: when you want one for yourself, you simply pick the one off the top, and when someone from the cafeteria staff has a new clean one, they simply put it on the top.

Keeping Track of Identifiers with Lexical Environments

Lexical environments are an internal implementation of the JavaScript scoping mechanism, and people often colloquially refer to them simply as scopes.

In terms of scopes, each of these code structures gets an associated lexical environment every time such code is evaluated.

This is a wrap for the first part in this series of annotations from the book “Secrets of the JavaScript Ninja”. The next part will deal with generators, promises, and the all-important prototype.

Style Guide for Pattern Structure

This outline comes from the awesome book “Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions”. The reason for this outline is to have a place to store it for when I am considering how to explore concepts in more detail using a proven guide. I believe that outlining solutions to web development problems, as shown below, can go a long way toward educating those who are reading about your proposed solution.

Each pattern follows this structure:

Name: Make sure that this indicates what the pattern does. A name should be easily identifiable in a discussion with others, whether that discussion happens in person or in written comments.

Context: Give a good example of when this pattern would help to solve the problem. This does not have to be limited to the pattern itself; it can also describe the situation surrounding what you are explaining.

Forces: The constraints that make the problem difficult to solve; these can outline when it is good to use the solution.

Solution: This is where you show the solution(s) to the problem. You can show code examples in different formats, either as an all-in-one solution or step by step.

Sketch: Pictures are worth a thousand words. Giving a sketch can help a reader leave with a lasting impression of your solution.

Results: This can be a separate part, or combined with your solution. When giving code examples for the end result of the solution, it is best to have a demo of the final solution. Without the result, it could be difficult for someone to understand what your solution has solved.

Next: There might be other ways to solve the problem that can be used. This is a great way to give more links that will help someone learn more.

Examples: There might be online examples that go along with what you have shown, such as a fiddle or a CodePen; linking to them can show how your solution solves a problem in a similar way to what others have done.

Creating Good JS Unit Tests

I wanted to add this for my own reference; it comes from “Secrets of the JavaScript Ninja”. Having good unit tests is important for quality code, yet at the same time poorly constructed code brings no quality to your testing. There are three characteristics of good tests:

  1. Repeatability – When running a test numerous times, the test should always produce the same exact result each time. These tests should have no dependencies on external factors for the test to pass.

  2. Simplicity – A unit test is responsible for testing one thing and nothing else. To do this, any code that is not part of the unit you want to test should be removed. This prevents disrupting the intent of the test in the first place. The more code that is removed, the more likely it is that the specific code being tested will not be influenced by external code.

  3. Independence – There can be numerous unit tests running in your test suite. One thing that has to be remembered is that each test must have no dependency on the other tests that are running. Being able to break code down into the smallest possible unit will help to determine where the exact source of a bug is hiding when an error does occur.

These are notes that I want to have available any time I am putting together a test, so I can make sure I am taking these characteristics into consideration. A small sketch of what such a test might look like is below.
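
Here is a small, hedged sketch of such a test, using a tiny assert helper of my own rather than any particular test framework.

  function assert(condition, message) {
    console.log(condition ? `PASS: ${message}` : `FAIL: ${message}`);
  }

  // Repeatable: no external dependencies. Simple: tests one thing. Independent: builds its own data.
  function testReduceSumsItems() {
    const numbers = [1, 2, 3];
    const sum = numbers.reduce((total, n) => total + n, 0);
    assert(sum === 6, 'reduce sums the array items');
  }

  testReduceSumsItems();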