Implementing a Dynamic Array Stack in C++

Léonard H. Gombert
5 min read · Mar 4, 2021

Introduction

Last week, I took a Computer Science course and was fascinated by the many theoretical algorithms and patterns I learned about. Eager to put them to the test, I started a C++ project that involved a Mouse solving a maze, using one of the data structures I found most interesting: Stacks.

This short dive into what Stacks and Dynamic Arrays are is meant for the programmer who may have heard the terms before, but doesn’t know what they are or how they operate.

I — Stack Overview

A common analogy to explain the nature of a Stack is the middle-school lunchtime classic “pile of Cafeteria Trays”.

Trips down memory lane notwithstanding, a Stack is essentially a large pile of data, upon which new data can be added and removed.

The key feature of the Stack is that it operates on a Last-In, First-Out (LIFO) basis. When running into the school Cafeteria, you wouldn’t go through the trouble of grabbing the tray at the very bottom of the pile, so why would a Stack? The last item that was added onto the pile is always the first one that the Stack pulls out.

You actually come across and use Stacks a lot more often than you might think. The quintessential Ctrl+Z/Cmd+Z Undo function that exists in countless applications is one such example.

Every time that you delete text in MS Word, create a new layer in Photoshop, or apply a modifier in 3DSMax, it is saved onto a Stack of your “actions”. Whenever you Undo, you go “backwards in time” by essentially removing actions from the Stack, going from most recent to the oldest.

II — Stack Implementation

The logic behind the Stack implementation I’ve written up goes as follows: the Stack is a Dynamic Array, which means it will automatically resize itself every time it fills up.

The program starts with an array of size 1. When an int variable is added, the Stack first checks whether it is at full capacity; if so, the Array doubles its size. The value is then inserted into the Array.

With each insertion, a tailPointer variable is updated, keeping tabs on the index of the last element that was added. This is so that inserts happen at the index [tailPointer + 1].

#include <iostream>
#include <cstring> // for memcpy
#include "Stack.h"

Stack::Stack()
{
    currentSize = 1;
    stack = new int[currentSize] {0};
    tailPointer = -1; // -1 means the stack is empty
}

Stack::~Stack()
{
    delete[] stack;
}

void Stack::Push(int x)
{
    // if the tailPointer is already at the last available position...
    if (tailPointer == (currentSize - 1))
    {
        Resize();
    }

    // move the tailPointer up by one and add the value
    stack[++tailPointer] = x;
}

void Stack::Pop()
{
    if (tailPointer == -1)
    {
        std::cout << ">> Stack was empty <<" << std::endl;
    }
    else
    {
        // clear the last element and decrement the tail pointer
        stack[tailPointer--] = 0;
        std::cout << ">> Stack Popped <<" << std::endl;
    }
}

int Stack::Peek()
{
    return stack[tailPointer];
}

void Stack::Resize()
{
    int newSize = currentSize * 2;
    int* newStack = new int[newSize];

    // copy the contents of the original array into the new, larger one
    memcpy(newStack, stack, currentSize * sizeof(int));

    currentSize = newSize;

    // free the old array's memory and point stack at the new one
    delete[] stack;
    stack = newStack;
}

This Stack is then used for the maze-solving. Here, a Mouse is looking for the Exit. It can move Up, Down, Left and Right. Every time the Mouse can move in more than one direction, it picks one at random and keeps going. If the Mouse meets a dead end, it will Pop() positions off the Stack until it finds an unexplored direction it can move in.

Haven’t gotten around to learning HLSL yet

Eventually, the mouse will find its way to the exit. In the above gif, I’m running the code in the Visual Studio console window, and outputting results as ‘#’ to represent walls, ‘:’ to represent unexplored positions, ‘.’ to represent explored ones, ‘M’ for the Mouse, and ‘E’ for the Exit.

You’ll see that the mouse will back itself up every time that it hits a dead end, and pick up where it left off, until it reaches the Exit.

III — Computational Complexity of Dynamic Array

One final point I’d like to brush over is the Computational Complexity of something like a Dynamic Array.

“Wouldn’t it make more sense to increment the array size by 1 every time you needed more space? Why go through the trouble of doubling it?”

When first coming across a Dynamic Array, this may be your knee-jerk reaction, and it makes sense: doubling the size of the array is a lot of extra work for just one insertion. If my array size is 24 and I insert a 25th element, my new array is 48 elements long. That’s 23 empty spaces!

The “why” lies in what happens under the hood when you “extend” an array.

In reality, extending an array means allocating a new, larger array and copying each of your original elements into it, along with your extra piece of data.

Your program telling you it “extended” one of your arrays

When you’re extending a very small array by 1 extra element, this isn’t really a problem, since you’re only copying a handful of values. However, when you’re adding a single element to an array of 200 elements, you start hitting the roadblocks: your program is going to copy 200 elements just to add one at the end. This is even more of a problem if you’re constantly extending your array, just like I am here, because each consecutive insert incurs an increasing cost.

This is when averages come to save the day.

Inserting an element into your array is fast. Pretty much instantaneous. Copying, however, is not. The idea behind doubling the size of your array is that when you copy it, you give yourself a lot of overhead room, making all of your following inserts instantaneous ones.

On top of this, copy operations actually happen less and less frequently over time, because each doubling increases the number of inserts you can perform before the array fills up again.

This means that, on average, you spend far less time copying things than inserting them. All in all, this gives the Dynamic Array an amortized insertion cost of O(1): constant time per insert on average, even though the occasional insert that triggers a resize costs O(n).

Essentially, no matter how big the array grows, the average cost of an insert stays the same.

Closing Thoughts

Dynamic Arrays are pretty cool! At least I think they are, and I hope to have piqued your interest in the subject too :)

Although this small project could have run perfectly fine on a Fixed Array with a large size, I wanted to experiment with some theory that I had just picked up, and hopefully build some good habits.

Thanks for reading!
