Should games skip cutscenes entirely?

Video games as a means of telling stories have often been inspired by movies, and the clearest example of this is the use of cutscenes. Pac-Man is often said to be the first game to use cutscenes instead of moving straight from level to level with no intermission. After the player clears certain stages, a short vignette plays depicting Pac-Man and the ghosts chasing each other.

While these little vignettes are obviously a far cry from how modern cutscenes are used, the core concept is the same.

During these sequences the game takes control of the character away from the player in order to convey new information. The length of these sequences varies widely; Konami's Metal Gear Solid series is famous for its long cutscenes, with Metal Gear Solid 4 containing over eight hours of them. Cutscenes can also serve a wide variety of purposes.

They are used to introduce new characters, develop established ones, and provide background, atmosphere, dialogue, and more.

However, despite their ubiquity in modern big-budget games, cutscenes are not necessarily the best way to tell a story in a game. Many highly acclaimed games use few cutscenes, preferring instead to leave the player in control of the character throughout.

Valve Software’s Half-Life 2 is currently the highest-rated PC game of all time on the review aggregation site Metacritic, and it features only a single cutscene at either end of the game. Control is rarely taken away from the player for more than a few moments, apart from an on-rails sequence towards the end, and much of the background information that another game would deliver in a cutscene is instead conveyed through scripted events and details in the environment.

But are Half-Life 2’s unskippable scripted sequences really all that different from cutscenes? After all, the player is often unable to progress until other characters have finished their scripted actions and dialogue, so why not use traditional cutscenes and be done with it? To find experiences that are truly unique to games, we must first look at what makes video games distinctive as a storytelling medium. Unlike cinema, where the viewer has no control over the action, or traditional board games, where player actions have very little visual consequence, video games offer a unique opportunity to merge interactivity and storytelling. Gone Home, Dear Esther, and other games in the so-called ‘walking simulator’ genre have been praised as examples of the kind of storytelling that only games can deliver.

However, for some players these games present an entirely different problem: while they rarely take control away from the player, they also offer very little in the way of actual gameplay. Dear Esther, in fact, gives the player no way to affect the world around them; the only available action is to walk a predetermined path until the game ends. There is no way to ‘lose’ and no interaction with the environment, just what amounts to a scenic tour with narration laid over it. So despite the game’s lack of cutscenes, the near-total absence of player control and interaction means there is little to differentiate it from a fairly lengthy cutscene.

As video games currently exist, there seems to be a dichotomy between traditional storytelling and gameplay. For a game to tell the player a story, there must be some limit on what the player can do, whether temporarily, in the form of a cutscene or scripted sequence, or by restricting the player’s actions over the course of the game. Perhaps the games of the future will manage to integrate a great deal of player interaction with compelling storytelling. But that won’t be achieved by taking control away from players and forcing them to watch a short movie instead of letting them play.
