JOSEPH SIEREJKO

GAME DEVELOPER : PROGRAMMER : DESIGNER

Aberrant
roles Lead Programmer • Art Director • Level Design • UI
software Unity Engine • Adobe Photoshop • Visual Studio 2015
languages C# • HLSL • XML
team size 3
Aberrant is a Survival Horror game currently in development, built by two 3D Modelers and myself. It was originally a 20-week project and is now being expanded upon. My priority in this development is handling the programming end, as well as providing design decisions as Art Director. In addition to that, I designed the 2D assets for UI, created first-person animations/cinematics in Unity, and collaborated with the team on level design. I also had the pleasure of editing the video below (we were a small team).
AI Pathfinding
Of all the systems I designed and implemented for Aberrant, the most complex, layered component was the AI. The most complicated piece of that was reactive pathfinding.
  • AI pathfinding is based on Unity's Navigation Mesh, as well as a pseudo-random waypoint network I developed.
    • A custom A* search over a branching tile navigation network selects destinations, providing limited but controlled choice to AI Actors.
    • Player location bias pulls enemy towards player over time (organic, but forced, encounters).
    • Player choices interrupt AI pathfinding, shifting to another AI state.
    • In addition to the network, I went over the game map and added tile costs to manipulate potential AI movement.
      • Nav Agents prioritize pathfinding over collision, so adding costs also served to keep meshes far enough from walls, avoiding any clipping.
  • Here is a brief summary of initializing and running a basic patrol routine:
    • As AI Actors find new paths, they calculate whether a path is viable (locked doors, etc.) pre-generation.
    • During a patrol routine, the AI Actor will check whether the next node in the path crosses a closed door.
      • If the path comes across a door, the Actor runs a standard interaction, passing a reference of itself so the door knows what class type is trying to use it.
        • If the door is locked, request a new path based on the remaining accessible nodes.
        • If the door is unlocked, pause the Nav Agent's speed; once the enemy has stopped, open the door. After the door animation completes, tell the Nav Agent to continue moving.
    • Player interactions can trigger events that pull the AI Actor off its routine, resulting in different behaviour depending on the enemy type.
      • When the Enemy class loses sight of or gets too far from the Player, it will run a random local search on the Nav Mesh (random locations within a distance relative to its current location).
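The patrol steps above can be sketched roughly as follows. This is a simplified sketch, not the shipped code: the Door class, RequestNewPath, and the coroutine are hypothetical stand-ins for the actual classes.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.AI;

//Hypothetical sketch of the patrol door-check described above
public class PatrolDoorCheck : MonoBehaviour
{
    [SerializeField] NavMeshAgent agent;

    //Called when the next node in the path crosses a door
    internal void HandleDoor(Door door)
    {
        //Pass a reference of this Actor so the door knows what class type is using it
        if (door.IsLockedFor(this))
        {
            RequestNewPath();   //Re-path using only accessible nodes
            return;
        }
        StartCoroutine(PauseOpenAndContinue(door));
    }

    IEnumerator PauseOpenAndContinue(Door door)
    {
        agent.isStopped = true;     //Pause the Nav Agent
        yield return new WaitUntil(() => agent.velocity.sqrMagnitude < 0.01f);
        yield return door.Open();   //Wait for the door animation to complete
        agent.isStopped = false;    //Tell the Nav Agent to continue moving
    }

    void RequestNewPath() { /* pick a new waypoint from the accessible nodes */ }
}

//Minimal stand-in for the real door class
public class Door : MonoBehaviour
{
    public bool Locked;
    public bool IsLockedFor(MonoBehaviour user) => Locked;
    public IEnumerator Open() { yield return new WaitForSeconds(1f); }
}
```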
    This is a representation of a basic patrol routine, without variables like the Player. The rendered line is the path, and the short line is the angle relative to the player; that line is used in conjunction with an angle to determine whether the player is in sight.
    An example of cost values on floor tiles. The light blue is the built-in cost, and the purple is the added cost, giving more control over how Nav Agents reach a destination.
    Because collision is ignored by Nav Agents, I implemented a hot-fix: adding a higher cost to outer walls and corners to avoid issues like meshes clipping through walls.
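In Unity terms, those extra costs can be applied per-agent through the Navigation Area system. A minimal sketch, assuming a custom "WallBuffer" area has been painted along the outer walls (the area name is an assumption):

```csharp
using UnityEngine;
using UnityEngine.AI;

//Sketch: raise traversal cost near outer walls so agents path away from them
public class AgentAreaCosts : MonoBehaviour
{
    [SerializeField] NavMeshAgent agent;

    void Start()
    {
        //"WallBuffer" is a hypothetical custom Navigation Area painted along outer walls
        int wallBuffer = NavMesh.GetAreaFromName("WallBuffer");

        //Default walkable cost is 1; a higher cost discourages (but doesn't forbid) entry
        agent.SetAreaCost(wallBuffer, 5f);
    }
}
```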
    State Machines and AI Management
One of the successes I had with developing the Enemy AI was building a vast array of cross-compatible States, each implementing an IEnemyState interface. These states were then slotted into a Finite State Machine (FSM) class, which made use of the Strategy Pattern for hot-swapping.
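The interface-plus-FSM arrangement can be sketched in plain C#. This is a simplified sketch: the real IEnemyState and FSM carry Unity-specific context such as the PlayerController.

```csharp
//Sketch of the hot-swappable state setup described above
public interface IEnemyState
{
    void Enter();
    string Run();   //Returns the name of the state to switch to, or null to stay
    void Exit();
}

public class PatrolState : IEnemyState
{
    public void Enter() { }
    public string Run() => null;    //Stay patrolling in this toy example
    public void Exit() { }
}

public class FiniteStateMachine
{
    public IEnemyState Current { get; private set; }

    //Strategy Pattern: swap the active behaviour object at runtime
    public void ChangeState(IEnemyState next)
    {
        Current?.Exit();
        Current = next;
        Current.Enter();
    }
}
```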
Throughout development, I minimized Unity's built-in Update ticks where possible, to allow more control in composite classes. To handle AI management, I passed control of updating the AI through an EnemyManager class. The EnemyManager was responsible for:
    • Updating all AI classes, checking that each Enemy was currently active (Built and Awake).
    • Retaining a reference to the Player's position, lightly pushing AI to patrol near the player over time (separating the Body and the AI Brain).
    • Sending Sound events to all listening Enemies.
      • To add to environmental interactions, enemies have the ability to listen for sound.
      • EnemyManager has a delegate function that is called whenever 2 physics objects make contact.
      • The total magnitude, based on the object's mass and impact force, is sent to the EnemyManager, which then broadcasts it to all active Enemies.
      • If the Enemy's hearing range, minus obstructions, collides with the sound radius, the Enemy updates to investigate.
      • This is slightly more aggressive if the player moves too fast for too long while near the enemy.
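The hearing test can be sketched like this. It is a simplification: the obstruction penalty is an assumed stand-in for the real occlusion logic.

```csharp
using UnityEngine;

public static class HearingCheck
{
    //Returns true if the enemy's effective hearing sphere overlaps the sound radius
    public static bool CanHear(Vector3 enemyPos, float hearingRange,
                               Vector3 soundPos, float soundRadius,
                               LayerMask obstructionMask, float obstructionPenalty = 3f)
    {
        float effectiveRange = hearingRange;

        //Walls between the enemy and the sound shrink the effective hearing range
        if (Physics.Linecast(enemyPos, soundPos, obstructionMask))
            effectiveRange -= obstructionPenalty;

        //Two spheres overlap when the centre distance is less than the sum of radii
        return Vector3.Distance(enemyPos, soundPos) < effectiveRange + soundRadius;
    }
}
```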
    This is the class that manages the AI. It is shared with some other classes where needed, including getting a specific enemy group out of the enemy list. It calls each Enemy's Run(PlayerController) function for trickle-down updating, lessening reliance on internal Update ticks.
    using System.Collections.Generic;
    using UnityEngine;
    using System.Linq;
    using Entities.Player;
    using Utilities;
    
    namespace Entities.Enemies
    {
        public class EnemyManager : MonoBehaviour
        {
            [SerializeField]
            internal PlayerController Controller;
    
            static List<Enemy> enemies = new List<Enemy>();
    
            public static void AddEnemy(Enemy e) => enemies.Add(e);
            public static void RemoveEnemy(Enemy e) => enemies.Remove(e);
       
            //Get Enemies of a type when player uses a type specific Ability
            public static List<Enemy> GetEnemiesOfType<T>()
            {
                var query = (from enemy in enemies
                             where enemy is T
                             select enemy).ToList();
                return query;
            }
    
            void Update()
            {
                foreach (Enemy enemy in enemies)
                    //StateMachine has been built
                    if (enemy.IsBuilt && enemy.IsActive)    
                        enemy.Run(Controller); //Run AI
            }
    
            //Event Notification called from SoundObjectManager delegate function
            internal void NotifyOnObjectSoundEvent(GameObject soundObject, float impactForce)
            {
                foreach(Enemy enemy in enemies)
                if (enemy.IsBuilt && enemy.IsActive)
                        enemy.ListenForSound(soundObject, impactForce);
            }
    
            //Use player location information to pull enemy towards player over time
            internal void UpdateSearchMagnetism(GameObject soundObject, float impactForce)
            {
                foreach(Enemy enemy in enemies)
                {
                    if (enemy.IsBuilt)
                    {
                        enemy.ListenForSound(soundObject, impactForce);
                        Utils.GetEntityLocation(soundObject, 
                        enemy.LastKnownRoom, enemy.LastKnownTile);
                    }
                }
            }
        }
    }
                               
    The First Person Controller and its components had a series of iterations over the course of development.
    One of the early additions was the lean and look-over-shoulder. After adding IK to the game, this took a backseat until I rebuilt the hierarchy and some of the code to have the camera float along with the body:
    Below is a sample of code from PlayerController.cs. It has gone through a series of updates since adding Inverse Kinematics. One of my goals was to make the player's weighty movement a mechanic in itself. I ended up separating the player camera from the body, creating a floating component that acted as a rig, with the parent controlling mechanics like leaning. It took a long process and a few rewrites to get the movement to feel right. Through playtesting, I was able to refine the values of look and move acceleration and speed.
    Sample of Player Movement:
                  
    //Turn player body based on camera orientation
    internal void Turn()
    {
        //Get Camera orientation, ignoring the Up/Down camera swivel
        Quaternion cameraOrientation2D = Quaternion.Euler(new Vector3(
                    transform.eulerAngles.x,
                    cameraParent.transform.eulerAngles.y,
                    cameraParent.transform.eulerAngles.z));
    
        //Player's body rotation trails camera rotation
        transform.rotation = Quaternion.Lerp(transform.rotation, cameraOrientation2D, Time.deltaTime * 6f);
    }
    
    //Get angular difference from body.forward and camera.forward
    float GetFacingAngularDifference()
    {
        Vector3 currentPosition = transform.position,
                cameraTargetPosition = cameraTarget.position;
        
        //Remove Y factor to get 2D direction
        currentPosition.y = cameraTargetPosition.y = 0;
        
    //Get unit direction
    Vector3 direction = (cameraTargetPosition - currentPosition).normalized;

    //Get Player's forward as 2D vector
    Vector3 forward = new Vector3(transform.forward.x, 0, transform.forward.z);
    
        return Vector3.Angle(forward, direction);
    }
    
    //Move the player
    internal void Move()
    {
        //Get input from current input device
        InputModule.CheckInput(this);
    
        //Slow player down if leaning
        if (CamMovement.IsLeaning)
            Walk(leanSpeedPercent);
        else if (IsSprinting && CanSprint) //Currently Sprinting
        {
            timeSprinting += Time.deltaTime; 
    
            //Start sprint audio
            if (timeSprinting > 1f)
                sprintAudio.volume += .01f;
            
            //Second Audio source ("Outtro") for sprint is still active
            if(endSprintAudio.volume > 0)
                endSprintAudio.volume -= .05f;
    
            Sprint();   //Accelerates player to sprint speed
        }
        else
        {
            //Begin end of sprint audio
            sprintAudio.volume = Mathf.Max(sprintAudio.volume - .01f, 0f);
    
            if (timeSprinting > 2f)
            {
                //BUTT TO outtro sprint audio clip if constant sprint clip is playing
                if (sprintAudio.volume > 0f && !endSprintAudio.isPlaying && CanSprint)
                {
                    endSprintAudio.Play();
                }
                //Reset for next sprint audio trigger
                else if (endSprintAudio.volume >= 1f)
                    timeSprinting = 0f;
                    
                endSprintAudio.volume = Mathf.Min(endSprintAudio.volume + .005f, 1f);
            }
            Walk(WalkSpeed); //Normal Movement
        }
        
        //Trigger sound event for enemies to hear
        if (CurrentSpeed >= WalkSpeed / 1.5f)
            SoundObjectManager.NotifyPlayerSoundTriggered(
            gameObject, WalkSoundRange);
    }
    
    Here is a section of code displaying the vertical look. I have a Utils class, which has a function to clamp the euler angles of a rotation. Currently, the normal vertical angle is set to 75°, sprint is set to 30°:
    internal void UpdateVerticalLook()
    {
        //Get the next intended angle
        float verticalAngle = cameraTargetSwivel.eulerAngles.x + (vertLookSpeed * controller.InputModule.CamSensitivity);
    
        //Clamp vertical angle between next angle and +-current angle
        Utils.ClampAngle(ref verticalAngle, -currentAngleView, currentAngleView);
    
    //Decelerate the look speed when the joystick is not moving
        if (Mathf.Abs(controller.Look.x) <= Mathf.Epsilon)
        {
            vertLookSpeed = Mathf.MoveTowards(vertLookSpeed, 0f, controller.InputModule.CamLookDecel * Time.deltaTime);
        }
        else if ((vertLookSpeed >= 0 && controller.Look.x > 0) ||
            (vertLookSpeed <= 0 && controller.Look.x < 0))
            vertLookSpeed += controller.Look.x * 
            controller.InputModule.CamLookAccel * 
            Time.deltaTime;
        else
            vertLookSpeed = controller.Look.x *
            controller.InputModule.CamLookAccel * 
            Time.deltaTime;
    
        //Clamp the current speed to +-max speed
        vertLookSpeed = Mathf.Clamp(
            vertLookSpeed, 
            -CurrentLookSpeedClamp, 
            CurrentLookSpeedClamp);
    
    //Set the X angle (up/down swivel) to the current look position
        cameraTargetSwivel.eulerAngles = new Vector3(
            verticalAngle,
            cameraTargetSwivel.eulerAngles.y,
            cameraTargetSwivel.eulerAngles.z);
    }
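The Utils.ClampAngle helper referenced above is not shown; here is a plausible sketch (my reconstruction, not the shipped code) that accounts for Unity reporting euler angles in the 0–360° range:

```csharp
using UnityEngine;

public static class Utils
{
    //Clamp an euler angle, accounting for Unity's 0-360 wrap-around
    public static void ClampAngle(ref float angle, float min, float max)
    {
        //Map e.g. 350 to -10 so the clamp works symmetrically around zero
        if (angle > 180f)
            angle -= 360f;

        angle = Mathf.Clamp(angle, min, max);
    }
}
```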
    
    
    Early in preproduction, we decided that in order to give the player a fighting chance, they would be given a device with several functions that would be upgradable over the course of the game. We really wanted a radar and, traditional to the genre, a battery-powered flashlight. We packaged it all into a single military device called the Multi-Function Display (MFD).
    This is the abstract class MFDMode, slotted into the MultiFunctionDisplay:
                      
    namespace MFD.Modes
    {
        public abstract class MFDMode : MonoBehaviour
        {
            internal float changeModeCooldown = 0.25f;
            internal float currentChangeModeTime = 0.25f;
            internal abstract float BatteryDrain { get; }
            internal abstract string ModeName { get; }
            int currentAbilityIndex = 0;
            protected internal IMFDAbility CurrentAbility { get; protected set; }
            internal bool HasAbilities => Abilities.Count > 0;
            protected List<IMFDAbility> Abilities { get; set; }
    
            public void Install(IMFDAbility ability, MultiFunctionDisplay mFD)
            {
                Abilities.Add(ability); //Add ability to list
                Abilities[Abilities.Count - 1].Connect(mFD);    //Connect required components from MFD to ability
    
                //If no abilities have been installed yet, automatically set incoming ability as Current
                if (CurrentAbility.GetType() == typeof(EmptyMFDAbility))  
                {
                    CurrentAbility = Abilities[currentAbilityIndex = (Abilities.Count - 1)];
    
                    //Check if this mode is the current active mode before setting ability in MFD
                    if (mFD.CurrentMode.GetType() == GetType())
                        mFD.SetAbility();
                }
            }
    
            internal virtual void ActivateAbility(MultiFunctionDisplay mFD) => CurrentAbility.Activate(mFD);
    
            internal virtual void Run(IInputModule inputModule, MultiFunctionDisplay mFD)
            {
                //Ability has been added to mode
                if(Abilities.Count > 0)
                {
                    //Check input for ability swapping
                    TryChangeAbility(mFD, inputModule.ChangeAbility);
                    CurrentAbility.Run(mFD);    //Run ability core functions
                }
            }
    
            protected void TryChangeAbility(MultiFunctionDisplay mFD, int direction)
            {
                currentChangeModeTime -= Time.deltaTime;
    
                //Checks if ability axis has been moved left or right
                if (direction == 0 || Abilities.Count == 0 || currentChangeModeTime > 0) return;
                
                switch (direction)
                {
                    case 1:
                        //Ability index sets to 0 if direction moves past ability count
                        currentAbilityIndex = currentAbilityIndex == 0 ? 
                            Abilities.Count - 1 : currentAbilityIndex - 1;
                        break;
                    case -1:
                    //Ability index sets to count - 1 if direction moves below 0
                        currentAbilityIndex = currentAbilityIndex == Abilities.Count - 1 ?
                            0 : currentAbilityIndex + 1;
                        break;
                }
    
                Off();  //Disable exited mode
                CurrentAbility = Abilities[currentAbilityIndex]; //Set ability   
                On();   //Enable entered mode
    
                mFD.SetAbility();   //Setup ability in MFD
                currentChangeModeTime = changeModeCooldown; //Reset cooldown
            }
    
            internal virtual void On() => CurrentAbility.On();
            internal virtual void Off() => CurrentAbility.Off();
        }
    }
    
    
    Here is the interface for the mode abilities:
    using Entities.Player;
    using System.Collections;
    
    namespace MFD.Abilities
    {
        public interface IMFDAbility
        {
            string AbilityName { get; }     //Display name on device
            float BatteryDrain { get; }     //Battery usage while selected
            bool IsOn { get; }
            bool CanUse(MultiFunctionDisplay mFD);  //Ability check
            void Run(MultiFunctionDisplay mFD);     //Constant update when ability is selected
            void Activate(MultiFunctionDisplay mFD);    //Use ability
            void On();  //Initialize before Running
        void Off(); //Clean up before Running another ability
    
            //Pass all variables needed from MFD on install
            void Connect(MultiFunctionDisplay mFD);
        }
    }
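As a hypothetical example of the interface in use, a minimal flashlight-style ability might look like this. This is a sketch, not the shipped code: it assumes MultiFunctionDisplay is a MonoBehaviour exposing BatteryCharge and Drain, which are invented names for illustration.

```csharp
using UnityEngine;

namespace MFD.Abilities
{
    public class FlashlightAbility : IMFDAbility
    {
        Light light;    //Assigned from the MFD on Connect

        public string AbilityName => "FLASHLIGHT";
        public float BatteryDrain => 0.5f;
        public bool IsOn { get; private set; }

        //BatteryCharge is a hypothetical MFD property
        public bool CanUse(MultiFunctionDisplay mFD) => mFD.BatteryCharge > 0f;

        public void Run(MultiFunctionDisplay mFD)
        {
            //Drain the battery only while the light is active (Drain is hypothetical)
            if (IsOn) mFD.Drain(BatteryDrain * Time.deltaTime);
        }

        public void Activate(MultiFunctionDisplay mFD)
        {
            if (!CanUse(mFD)) return;
            IsOn = !IsOn;
            light.enabled = IsOn;
        }

        public void On() => light.enabled = IsOn;
        public void Off() => light.enabled = false;

        //Pass all variables needed from the MFD on install
        public void Connect(MultiFunctionDisplay mFD) =>
            light = mFD.GetComponentInChildren<Light>();
    }
}
```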
    
    A feature we wanted to get into the game was a deep interaction system. In the initial prototype, interactions were un-animated: everything interacted with would do its job independent of the player's model rig.
    In later iterations, a new interface layer was implemented on top of the interface that was already being used to trigger interactions. With the modular code, it took minutes to add new interactable objects.
    The Run function in InteractionController.cs (the core of all player-based interactions):
    internal void Run(PlayerController pC, Vector3 position, Vector3 forward)
    {
        //Deactivate prior to trying to get interactable
        if(CurrentInteractable != null)
            CurrentInteractable.Deactivate(pC);
    
        RaycastHit hit;
    
        //Tries to get interactable with a raycast
        if(Utils.TryGettingInteractable<IInteractable>
            (ref CurrentInteractable, out hit, position, forward,
            pC.ExclusionLayerMask, interactionDistance))
        {
            CurrentInteractable.Run(pC, hit.point); //Run the found interactable
    
            //Slow down cursor if interactable is labeled soft target
            if (CurrentInteractable.IsSoftTarget)
                OnChangeLookSpeedByInteraction(
                    ref cameraMovement.CurrentLookSpeedClamp,
                    cameraMovement.InteractionLookSpeedClamp, Time.deltaTime * 20f);
            else
                //Normal speed if not soft target
                OnChangeLookSpeedByInteraction(
                    ref cameraMovement.CurrentLookSpeedClamp,
                    cameraMovement.MaxLookSpeedClamp, Time.deltaTime * 5f);
    
            //If player input uses interactable
            if (pC.InputModule.IsInteracting)
            {
                CurrentInteractable.Deactivate(pC); //Deactivate interactable before trying to use
                TryInteracting(pC, position, forward); //Activate interactable
            }   
            //Specific code to Door Interactable for peek function
            else if (pC.InputModule.IsInteractingAlternate && 
                CurrentInteractable.GetType() == typeof(DoorInteractable))
            {
                CurrentInteractable.Deactivate(pC); 
                TryInteracting(pC, position, forward, true);
            }
        }
        else
        {
            //Send to default look speed if interactable not found
            OnChangeLookSpeedByInteraction(
                ref cameraMovement.CurrentLookSpeedClamp,
                cameraMovement.MaxLookSpeedClamp, Time.deltaTime * 5f);
        }
    }
    
    The Interaction interface class (sans-IK functionality):
    namespace World.Interactables
    {
        public enum InteractableTextAxis { X, Y, Z }    //World Text alignment
    
        public interface IInteractable
        {
            void Activate(PlayerController pC, InteractionController iC, bool isAlternate); //Use Interactable
            void Run(PlayerController pC, Vector3 rayHitPoint);     //Runs when focused on
            void RunOnCommand(PlayerController pC, InteractionController iC);   //Run in specific contexts
            void Deactivate(PlayerController pC);   //Called when leaving focus
            bool IsSoftTarget { get; }  //Slows camera speed when true
        }
    }
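To illustrate how little code a new interactable needed, here is a hypothetical minimal implementation of the interface above: a note the player can pick up, which removes itself on use. The class and its behaviour are invented for illustration.

```csharp
using UnityEngine;
using Entities.Player;

namespace World.Interactables
{
    //Hypothetical example: a note the player can pick up and read
    public class NoteInteractable : MonoBehaviour, IInteractable
    {
        public bool IsSoftTarget => true;   //Slow the camera while hovering

        public void Run(PlayerController pC, Vector3 rayHitPoint)
        {
            //Show the interaction prompt while focused (sketch)
        }

        public void Activate(PlayerController pC, InteractionController iC, bool isAlternate)
        {
            //Display the note's text, then remove the object from the world
            Destroy(gameObject);
        }

        public void RunOnCommand(PlayerController pC, InteractionController iC) { }
        public void Deactivate(PlayerController pC) { }
    }
}
```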
    
    I was also in charge of designing and programming all UI. Our goal early on was immersive gameplay. For us, this required a way to convey all in-game information without a screen-space user interface. With 2 exceptions, all player information is conveyed on the MFD that the player holds. Below are some video examples of my UI work.
    The Bio-Scanner UI uses masks to keep the ping UI bound to the MFD screen. Also displayed in this video is swapping modes and abilities, which updates the MFD images and text. The text also has a function that picks random characters at high speed... just for a bit of aesthetic polish.
    Code from the Utils class demonstrating how the text is generated with procedural printing:
                                         
    //*******************************************************//
    #region Text Effects
    
    public static IEnumerator GenerateString(Text text, string generateGoal, int maxMutations, AudioSource source = null)
    {
        System.Random random = new System.Random();
        int pos = 0;
        text.text = "";
    
        while (text.text != generateGoal) 
        {
            text.text += " ";
            char[] randomChars = text.text.ToCharArray(); //Separate chars
    
            //Set random iterations amount per char
            int mutationCount = random.Next(1, maxMutations);
    
            for (int i = pos; i < text.text.Length; i++)
            {
                for (int j = 0; j < mutationCount; j++)
                {
                    yield return new WaitForEndOfFrame();  
    
                    //Assign a temp random character
            randomChars[i] = (char)random.Next('a', 'z' + 1); //Upper bound is exclusive
    
                    //Display current state of generation in-game
                    text.text = new string(randomChars); 
                }
            }
            randomChars[pos] = generateGoal.ToCharArray()[pos];
            pos++;    //Next character in array
    
            //Play audio clip
            if (source != null && !source.isPlaying)
                source.Play();
    
            //Current state, only displayed once mutations of current index are complete
            text.text = new string(randomChars);
        }
    }
    
    public static IEnumerator OnWriteTextOverTime(Text text, string goal, float time)
    {
        float resetTime = time;
        char[] goalAsChars = goal.ToArray();
    
        for (int i = 0; i < goalAsChars.Length;) {
    
            //Count down and compare
            if ((time -= Time.deltaTime) <= 0)
            {
                time = resetTime;   //Set new timer
                text.text += goalAsChars[i]; //Add next index to text
                i++;    //Go to next letter when timer expires
            }
            yield return new WaitForEndOfFrame();
        }
    }
    
//Pad with a leading '0' to keep the text at 2 digits
    public static void UpdateCountText(Text text, int count) =>
        text.text = count < 10 ?
            "0" + count.ToString() : count.ToString();
    
    #endregion
    //*******************************************************//
    
    
    Pause Menu UI was probably the most time-consuming in terms of designing elements. The 2D assets were designed in Photoshop using splotch brushes, font types, and primitive shapes.
    One of the exceptions to the no screen-space UI rule is the health indicator, a surrounding blood effect. As health gets low, the amount of blood increases. When player health is less than 15%, the blood effect will contract and release for 10 seconds before health recovers back to 25%. All in-game audio is also distorted through audio filters, and a heartbeat audio clip plays with the contractions.
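The low-health behaviour could be driven by a coroutine along these lines. This is a sketch with assumed field names (bloodOverlay, heartbeat, PlayerHealth), not the shipped code.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

//Minimal stand-in for the real health class
public class PlayerHealth { public float Value; public void Set(float v) => Value = v; }

public class HealthBloodEffect : MonoBehaviour
{
    [SerializeField] Image bloodOverlay;    //Full-screen blood texture
    [SerializeField] AudioSource heartbeat;

    //Called when health drops below 15%
    internal IEnumerator CriticalPulse(PlayerHealth health)
    {
        float elapsed = 0f;
        while (elapsed < 10f)   //Contract and release for 10 seconds
        {
            //Pulse the overlay alpha in time with the heartbeat clip
            float pulse = Mathf.PingPong(Time.time * 2f, 0.4f);
            SetBloodAlpha(0.6f + pulse);

            if (!heartbeat.isPlaying) heartbeat.Play();

            elapsed += Time.deltaTime;
            yield return null;
        }
        health.Set(0.25f);  //Recover back to 25%
    }

    void SetBloodAlpha(float a)
    {
        Color c = bloodOverlay.color;
        c.a = a;
        bloodOverlay.color = c;
    }
}
```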
    This is the "OverCharge" UI for the Overcharge Ability. When the player hovers over an object that implements IOverchargeInteractable, the UI provides feedback with an animated ring at mid-screen, accompanied by a ping audio clip.