20minJS

Episode 9 - Front-end testing with Lucas da Costa

May 04, 2022 OpenReplay Season 1 Episode 9

In this episode Lucas and I talk about front-end testing. What is testing, what type of tests are there and what kind of tools do front-end developers have to tackle these tasks. Those are all questions that get answered, so pay attention!

We also discuss a bit about his latest book, "Testing JavaScript Applications", published by Manning.


Get in touch with Lucas:

Lucas' favorite tools:

Get the book!
Check out "Testing JavaScript Applications" and use this code to get a 35% discount during checkout: pod20minjs22

Review Us!
Don't forget to leave a review of the episode or the entire podcast on Podchasers!

Meet our host, OpenReplay:
OpenReplay is an open-source session replay tool focused on front-end developers. If you're looking for a way to understand how your users interact with your application, check out OpenReplay.



Lucas da Costa:

I think people have a misconception that when you're doing TDD, you need to write a full-blown test and then you need to write a lot of code to pass that test. And that's not what TDD is about. That's exactly the opposite of what TDD preaches. So TDD actually...

Fernando Doglio:

This is episode number nine of 20 Minute JavaScript; we're one away from the big 10. As always, this episode is hosted by OpenReplay, an open-source session replay tool for front-end developers. If you're looking for a way to understand how your users interact with your application, check out OpenReplay. I'm Fernando Doglio, your host and next-door neighbor, maybe, I guess, for the next 20 or so minutes. And today we're going to be talking about front-end testing with Lucas da Costa. He's a Brazilian developer living in London whose work can be found in popular testing libraries such as Chai and Sinon.JS. He's also written a book about the topic, "Testing JavaScript Applications". So Lucas, welcome. Thank you for being here, and please introduce yourself.

Lucas da Costa:

Yeah, thanks for the invite, Fernando. Very happy to be here, chatting with you.

Fernando Doglio:

Yeah, absolutely. Please tell our audience, those who don't know you, who you are in a bit more detail.

Lucas da Costa:

Yeah. So, um, as you said, I've worked with quite a few testing libraries in the past, being a maintainer of Chai and Sinon. I haven't been that present more recently, but I've also sent some contributions to Jest previously. And yeah, I'm a software engineer at Elastic, and I've written a book about testing as well: "Testing JavaScript Applications", which is published by Manning. And I really love command line interfaces, and writing. Yeah.

Fernando Doglio:

Awesome. All right. We're going to be talking about testing today; testing web applications, actually. So let's get down to it. Testing is a bit of a topic, you know: a lot of people have different opinions on it, even though I think the literature is quite clear on what should be considered testing and what not. But a lot of developers take that as a loose guide, follow whatever comes in handy from those instructions or guides, and forget about the rest. I want to hear what you have to say about testing in general first, and then we'll get down to the details. So let's get started with the basics: how do you define testing of a web application?

Lucas da Costa:

Right. So I see tests as experiments. You're creating experiments to see if your application matches your expectations. In the same way that you create hypotheses in, I don't know, physics or any natural science, and then you run some experiments to see whether they contradict your hypothesis of reality, I think of testing the same way: you have hypotheses about how your application works under certain conditions, and then you run a few tests. Not to prove that it does work exactly as you think, because you cannot prove that, but to prove that at least it does not contradict what you think. You can, for example, fail to test the one part of the application that has a bug. So you can never prove that there is no input that will generate an error, but you can at least try to contradict yourself and fail to. So that's what I think about tests. That's how I see them.

Fernando Doglio:

Interesting view of it. So how do you perform these tests? How do you check these hypotheses? Because I'm sure we're not just talking about manual testing here.

Lucas da Costa:

Yeah. So when I test an application, what I try to do is test limit conditions, because I know my application inside out: I know what rules I've written into it, so I can imagine which scenarios are most likely to fail. When scientists are, um, testing reality, let's put it that way, they can't see the source code for reality, so they don't know exactly what to test for. Because we can see the source code, we can try to imagine the error scenarios that are most likely to happen, and then we can try to test for those. We can't test every single possible scenario, but we can test the limit scenarios. That's how I go about doing it. And, like we can chat about later, there are different kinds of tests, which help you in different ways. But that's how I see it: trying to test for what is most likely to go wrong.

Fernando Doglio:

Okay. But let's chat now about the different types of tests. Which ones do you deem absolutely necessary to implement on every project, and which ones maybe not, because they'd be overkill, but at least worth being aware of, knowing those are tools you have at your disposal?

Lucas da Costa:

So I think most people might be familiar with the testing pyramid, where you have the unit tests, the integration tests, and the end-to-end tests. I also know many people don't like that classification, and there are many people trying to come up with new terminology, but I quite like it. Not because of the terminology used for each part of the pyramid, but because I think of it as a spectrum, not as well-defined categories. For those who aren't familiar with the pyramid: on the bottom, the largest part, you have the unit tests; above it, in the middle, you have the integration tests; and on the top you have the end-to-end tests. The way I see the pyramid is that as you go up, you have more integration: you're testing a larger part of your application and you're simulating user behavior more closely. And as you go down to the bottom, you're testing individual functions; you get closer to the code, to individual pieces of functionality. So unit tests are very numerous, that's why they're at the base of the pyramid, but they're also very close to your pure functions; they're really tightly coupled to your code, to your particular implementation. Then as you go up, tests get more and more loosely coupled, and you have fewer and fewer of them. The more integrated your tests are (if you're writing an end-to-end test, you're going to be testing a large part of the application anyway), the fewer of them you need to write. That's why they're at the top. And I find that these tests have different utilities, as I mentioned. So when I'm writing code and I just want fast feedback, I'll go for unit tests. And then if I'm trying to, I wouldn't say prove that my application works, because you can't, as I've mentioned, but if I'm trying to simulate user behavior more closely to get stronger guarantees, to be more confident in my application, I'll write an end-to-end test. Because if you imagine that an end-to-end test is actually a browser automation framework opening up a browser, clicking buttons, sending requests to your backend, and seeing if your web application responds correctly, that's as close as it gets to what users do. That's why you can be very confident that your application will work if you write those. You cannot be 100% sure, but you can be quite confident. However, if you unit test a single function in your application, you will have no idea whether the application will behave correctly for users. You just know that the function, given those particular inputs, returns some outputs.
And there are integration tests in the middle of that. Integration tests give you more confidence than unit tests, and they're easier to write and run than end-to-end tests, so there's a place for them as well. You might say that rendering a React component and interacting with it without a browser, just simulating the DOM with JSDOM, is an integration test; I would classify it as one as well, and it provides quite good quality guarantees. It's not the same thing as running a real browser and really clicking through things, but it's quite close. So yeah, I think knowing the differences, the benefits, and really the pros and cons of each of those can help people spend as little time as possible while being as confident as they can be. Because many people think that they should write many, many tests, and that just writing lots of tests will prevent errors.
But in fact, one needs to balance how much time they spend writing tests against the benefit those tests give them. So knowing the types of tests and knowing how to balance them can help a lot.

Fernando Doglio:

Absolutely, absolutely. Because otherwise the return on investment, let's put it that way, starts diminishing the more tests you implement. Out of all these three types in the pyramid that you mentioned, do you consider them all mandatory for production-ready projects? And if so, should they be implemented from the get-go, starting your project with tests in place, or should they be added sequentially throughout the development process, eventually reaching production?

Lucas da Costa:

Yeah. So I do think tests are essential if you're going to maintain a project in production. If you're just writing toy code, playing around with a language, you might not want to write tests; if you're going to write, I don't know, a hundred, two hundred, three hundred lines of code and you're not going to touch it again, that's fine. I think of tests as an investment: you're investing time now so you can spend less time in the future testing the same things over and over again. So the longer you're going to have to maintain an application for, the more tests I would write. And you don't need to aim for 100% coverage, and we can talk about why I don't believe in a hundred percent coverage in a bit, so you don't have to aim for that. But I would definitely add tests from the get-go. You don't need to be as thorough if you're not going to maintain it for a while, but the more updates you make, the more manual testing you're going to have to do in the future; you'll just have to spend a lot of time if you don't write tests. And also, writing tests as an add-on is very difficult, because when you're writing tests as you go, you worry about making your code testable, and if you don't write testable code, it's going to be a mess in the future to refactor more and more code just to be able to write tests. So if you're going to have to maintain something, I would start writing tests.

Fernando Doglio:

Okay, interesting. And when you say writing tests, do you mean unit tests, or are you also talking about integration and end-to-end tests?

Lucas da Costa:

So I think it depends on what phase of development you're in. I would definitely write a few unit tests so that you can get some quick feedback. It doesn't seem like it will be hugely beneficial when you're getting started; you think, oh, I'm going to be writing all these tests just for these small functions, and I'm going to spend a lot of time doing that. But honestly, I personally spend a lot more time when I don't write tests, because whenever I make a change, if I need to do manual testing, the longer the change takes, the more time I'm wasting. Of course, if it's a smaller change, you know, if I'm changing the return message of a certain route in my web server, fine, I can do the change first and then add some tests. I don't believe there's a silver bullet, that you should always write tests first. But I do believe that writing unit tests as you develop is very helpful. And one of the things I try to do as well is write at least a few end-to-end tests to prove that the functionality works, if I'm adding or changing a significant piece of functionality. And maybe that end-to-end test will be more like, I don't know, rendering the React component to JSDOM and testing it, because, as I've said, that's quite close to what happens in reality. So I try to write a few unit tests, so I can get fast feedback when I'm developing, and then write tests higher up the pyramid, not to prove something works, but to be confident that it will work.

Fernando Doglio:

Okay. There are two topics I want to ask you about. First, you said to write tests while you write code, and I definitely think that's useful, especially because of what you said: you're forced to make the code testable, which doesn't always happen if you don't consider testing during development. But taking that to the extreme, are you someone who recommends TDD as a practice, or do you not believe it provides a significant benefit to the development process?

Lucas da Costa:

Yeah, no, I absolutely like TDD and I try to always follow it. Like, as I've mentioned, if I'm just changing a small string, I might write the test right after; I'm not going to say I do TDD in every single situation, because I don't think there's anything that applies to every single situation. The more time you're in this role, the more you start to understand that there are no silver bullets, that it always depends. But I try to do TDD as much as I can, especially when I'm going to spend a significant amount of time developing a new feature or fixing a bug or something like that. So, about the reason many people don't like TDD: for those who don't know TDD, many people think of it as writing tests first and then writing some code to pass the tests later. But it's not only that. I think people have a misconception that when you're doing TDD, you need to write a full-blown test and then you need to write a lot of code to pass that test. And that's not what it is about. That's exactly the opposite of what TDD preaches. TDD is actually about taking small steps, and this comes from Kent Beck's original book. Basically, what he says there is that TDD is all about fast feedback, as if you were trying to pull a bucket out of a well. You pull and you pull and you pull, and if you don't have anything to hold it, if you want to get some rest, you're going to have to do it all at once; it's going to be very tiring. But if you have something to hold it, you can pull a little bit and then stop, think some more, pull a little more. And that's what TDD is about: you don't need to write a full-blown test. You can write a simple test, something very simple, just a few lines, and then write the dumbest code that can pass that test.
So if you're checking that a function returns true, for example, you can just hardcode "return true" in that function, just so you know that at least you're importing it correctly, that your code is even running. And then you can add some more situations to your test: you can try to pass a certain value, like, if this value is odd, return true. And then you go to the function and implement only that. But it will still fail if the value, I don't know, is even, and then you go and implement that part. So you take small steps: a little bit of test, a little bit of code, a little bit of test, a little bit of code. It's not about writing a full-blown test and then a full-blown piece of code that completely solves it. So that's how I go about it, and it's been very helpful for me, and I'd recommend people do the same. But yeah, as I've mentioned, there's no silver bullet, although I think it's very helpful. So I'd recommend everyone to try it, and to read Kent's original book as well, because it's really enlightening when it comes to really understanding TDD.

Fernando Doglio:

Definitely, definitely. Yeah, I've always seen that the major block developers face when trying TDD for the first time is exactly that: they're sitting there, looking at the empty test, thinking, how do I test something that I haven't written yet? I mean, we're kind of used to thinking while we code, instead of thinking ahead, trying to figure out what's the first thing this function has to do, writing the test for it, and then going for it.

Lucas da Costa:

Yeah. I think the big misconception is that TDD is about tests. TDD is not about tests; TDD is about taking small steps.

Fernando Doglio:

Right. But it's in the name, you know, Test Driven Development. So maybe the name wasn't the best.

Lucas da Costa:

Yeah. Small step driven development.

Fernando Doglio:

That would have helped. All right, moving on: a hundred percent test coverage. You mentioned that you don't believe in that, so can you expand on it?

Lucas da Costa:

Um, yeah, so I don't believe in 100% coverage because it's basically impossible to test all possible inputs you can give to your code. Imagine you have a function that takes a number. How many numbers can you have in JavaScript, right? Are you going to test every single number there is in JavaScript, to see if the outputs for every single number are correct? That's going to be a lot of work. And what if you pass strings? There's just an infinite amount of input you can give to that function. If you have two parameters, can you imagine all the possible combinations of arguments you can give it? It's just not feasible. And even if you could do that, what about the interpreter that runs your code? Are you going to test it, have 100% coverage for that as well? Are you going to test the instructions that are running on your CPU? You're not going to do that. So there's one quote I really, really like by James O. Coplien, from his article "Why Most Unit Testing Is Waste", which explains that quite well, I think. In that article he says: "I define 100% coverage as having examined all possible combinations of all possible paths through all methods of a class, having reproduced every possible configuration of data bits accessible to those methods, at every machine language instruction along the paths of execution. Anything else is a heuristic about which no formal claim of correctness can be made. The number of possible execution paths through a function is moderate: let's say ten. The cross product of those paths with the possible state configurations of all global data and formal parameters is indeed very large. And the cross product of that number with the possible sequencing of methods within a class is countably infinite.
If you plug in some typical numbers, you'll quickly conclude that you're lucky if you get better coverage than one in ten to the twelfth power." So yeah, I think that perfectly reflects why one cannot have 100% coverage. 100% coverage just determines that 100% of the lines in your code can run without crashing, just because they've run. But it doesn't guarantee that your application works correctly, and it's impossible to do so. You can get some guarantees with types, and we can talk a bit more about that if you want, but 100% coverage does not exist. What exists is 100% of your lines can run.

Fernando Doglio:

Alright, interesting. I'd never heard that definition, but I guess it makes sense. Yeah, you can't really prove a hundred percent that everything will run correctly. So where do you draw the line? When do you say, all right, I've written enough unit tests to be able to go to sleep tonight, and I know this thing's not going to crash with the first input it gets?

Lucas da Costa:

So one of the things I'd say to people is that bugs will happen. It doesn't matter how many tests you write, how many types you create, how beautifully written your program is, or how much manual testing you do: bugs will happen, things will crash, and you need to be prepared for that. So you need to have an observable application, so that you can figure out when things fail, get alerted quickly, and fix things as quickly as possible. And you can design resilient applications so that when things fail, it's not critical. But you cannot prevent failures from happening. That would be my number one advice: things will fail and you need to cope with that. That's what's going to make you sleep at night; it's not that bugs will never happen, because they will happen. But when it comes to defining how many unit tests to write, what I try to do is test for the limit conditions, as I mentioned. So if I have three possible paths of execution, I'll test all three. And I'll try to use types as much as I can to constrain my application as well, because types are what really prove something cannot happen: if an invalid input is even possible, your program will not compile. If it's possible that you're passing an undefined value to a function that should take a number, your program will not compile, if the possibility even exists. So I try to do that as much as I can, and I think that helps me constrain my tests as well. I have a talk that I presented in Moscow in 2019 which talks exactly about that: how we can use types to reduce the amount of possible inputs you can give to a function, so that you can reduce the number of possibilities you have to test.
So that's something I try to do. And I'd say that you will just never be 100% confident that things will not fail, but you can test thoroughly enough to be confident that at least the most common use cases will not fail. And maybe they will only not fail most of the time, because maybe your application will eventually get into a bad state and things will fail. So yeah, I'd say you need to be comfortable dealing with failure, and your application needs to be resilient, so that when things fail, you're prepared.

Fernando Doglio:

Cool, nice answer. So what I got from that is TypeScript over JavaScript, and monitoring over testing. Kind of.

Lucas da Costa:

I wouldn't say monitoring over testing, I'd say testing and making sure you've got good monitoring.

Fernando Doglio:

Going up the pyramid, leaving unit tests aside for a minute, and even leaving aside integration and end-to-end tests, everything you can automate when it comes to checking that a functionality works: do you think, conceptually if you will, that the developer who worked on a feature is the best person to test it? Or do you think the implicit bias of working on something and building it yourself works against finding the potential problems yourself, versus having someone else who's completely new to the feature approach it as a brand new user?

Lucas da Costa:

Yeah. So I definitely think you will be biased if you're testing code you've written, 100%. That's definitely going to happen, because you're just too close to the problem. And it doesn't need to be a QA professional testing it; it can just be your colleague, your fellow software engineer, because just the fact that they don't know the code, or at least aren't as close to it as you are, will help a lot. So if someone else can test your code, yeah, go for that.

Fernando Doglio:

And on that note, what do you think of the role of the testing professional, the QA professional, on a development team? Working as a consultant in the past, I've seen many companies, many development teams, that don't really rely on that and just have their own developers test the whole thing, from code to functionality; if they can avoid it, they don't spend any money, time, or resources on someone who's dedicated a hundred percent to testing the application or the feature. But I've also seen many teams that have not only one person, but a dedicated sub-team inside the project to fulfill that role. So if you could pick, would you go for only developers testing, even cross-testing like you mentioned, or would you definitely have dedicated people, let's put it that way, to perform the testing?

Lucas da Costa:

So I think QA professionals do have a very important role on teams. If you have the time and the people, it's of course useful to have your developers doing testing, because if the developers know a lot about who the end customer is, if they deeply care about the user experience and how the user feels when they use the application, I think it's fine to have developers testing the application. But I think there's a misconception that QA is just an expensive way to do something that unit tests can do. I don't think that. I think the role of a QA professional complements the role of a software engineer. If developers write good enough unit tests, good enough integration tests, and good enough end-to-end tests, QA professionals can perform more proactive work. Instead of just testing the same thing on every release, they can explore and try to find new bugs in your functionality, using the application in ways developers never thought possible. And they can be close to the users, try to think like users, and try to do the weird things users would do that developers didn't think of. So if developers do their part and write good tests, QA can do more proactive work, and at the end of the day the application gets better, because you're just catching more scenarios, you're looking for more quality issues to improve on. So I do think QA has a very important role. And it's not that QA cannot write automated tests; they can, and they should do whatever they can to make machines do the automatable work. Humans shouldn't be doing things too many times.
If you're doing something too many times and a machine could do it for you, it's worth considering making the machine do that for you. So I think both things, the automated tests and the role of QA, complement each other. They're not conflicting.

Fernando Doglio:

Awesome. Okay, thank you for that. Back to the pyramid: for every step of that pyramid, do you have a preferred set of tools?

Lucas da Costa:

So, yeah, in JavaScript I do have a preferred set of tools. I know other languages have their own recommended frameworks and their own recommended ways of doing things, but I'm going to talk about JavaScript, which is what I write the most. I quite like Jest, so I use Jest a lot. I used to use Sinon with Jest, but with what they have done with timers recently, replacing the old mechanism they had for faking and controlling time and making things deterministic, they've done some brilliant work there, so I don't have to use Sinon that much anymore. I still quite like the API, so sometimes I do, but I mostly rely on Jest. I also use some related libraries to do snapshot testing; if I'm trying to test some CSS or CSS-in-JS, I use some particular serializers. But I'm mostly around Jest and its ecosystem. Now, when it comes to writing end-to-end tests, I've used Cypress a lot in the past, and I quite liked it, but more recently I think Playwright might be a better choice. It's more lightweight, easier to set up, the documentation is great, and there's support for different browsers. I love Playwright; I've used it quite a lot. And more recently I've also written a blog post on Elastic's blog about synthetic testing: basically, how you can replace your end-to-end tests with journeys written with the Synthetics package we have, which basically allows you to write tests that run on your machine, run on CI, and then can also run continuously in the cloud to monitor things as they run. I think all of those are good choices, so yeah, there's plenty of interesting libraries to look into.

Fernando Doglio:

Definitely. All right, we'll have a link for all of these in the show notes so people can find them if they don't know about them. Now, the last question on the topic: can you please talk a little bit about your book? Tell our audience what's in it and why they'd want to read it.

Lucas da Costa:

Yeah, absolutely. So my book is called "Testing JavaScript Applications"; it's published by Manning. In it, I talk a lot about the different parts of the pyramid. I give you practical examples of how to test your web application, writing unit tests, integration tests, and end-to-end tests. I talk a lot about how the pyramid is actually, as I see it at least, a spectrum, and what you need to think about when you're considering what quality guarantees you're going to get from the tests you're writing. If I had to summarize my book in one sentence, I'd say it's ROI-focused testing. It's not just "always do this" or "always do that"; I teach you how to make the trade-offs, how to evaluate whether something should or shouldn't be done, what benefits you get, and how to think about the trade-offs. I think that's a good summary. And I cover many tools in the book: I cover Jest, I talk a little bit about many of the different testing libraries and the different serializers, I talk about Cypress, snapshot comparison, a bunch of things. TDD as well has an entire chapter to it. So yeah, if you're looking for a book that's going to teach you not only how to write tests, but why you should do it and when to do it, I'd say that's what my book will teach you.

Fernando Doglio:

Nice. So I highly recommend it for developers, especially new ones, but also experienced ones, to check it out and not only pick up the tools but also understand the philosophy behind them. Cool. Let's go through the last three quick questions I ask all guests so that we can get your insight from your experience. Can you share with us the best advice you ever received?

Lucas da Costa:

The best advice I've ever received? I think this is advice I got when I was still an intern. I remember, on my last day at the company I was interning at, my manager was talking to me about all the things we'd done, just doing a retro of all my time there. And one piece of advice he gave me was to not only accept how things work, but to understand how they work. This was in the context that sometimes you're going to see an example, you're going to write down the same thing as in the example, you're going to see it working, and you're just going to move on. But being curious and trying to understand why that works and digging deeper has helped me a lot. So going into libraries I like and reading the source code, and just being curious in general about how things work: I think that's the best advice I have received.

Fernando Doglio:

Absolutely. Yeah, that makes sense. I mean, if you copy, paste, and make it work, there's no learning there. So I agree with the manager. What is the most exciting project you worked on?

Lucas da Costa:

Most exciting project? I've seen quite a lot of interesting projects around, but I'd say that Chai in particular was super interesting to me, because at that time I had been writing JavaScript for a while, but not that long, you know, and writing an assertion library taught me a lot about how the language works and all the quirks and weird things that JavaScript has. Because when you're writing a testing library, you need to be prepared for many, many weird scenarios, many weird ways of doing things, so you end up learning a lot of metaprogramming. You have to think a lot about what you're going to implement in the library and what you're not going to implement in the library, because at the same time that you want to give users flexibility, you don't want to allow them to abuse bad practices. You don't want to have an API which incentivizes bad practices. So I'm not sure Chai was the most interesting, because I've worked on plenty of interesting things, but I think it was the most transformative one for my career.
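[Editor's note: a toy illustration of the kind of metaprogramming a fluent assertion library involves: "language chain" properties implemented as getters. This is a simplified sketch, not Chai's actual source; Chai's real `expect(x).to.be.equal(y)` chain is far more involved.]

```javascript
// A minimal chainable assertion. The getters make the API read like a
// sentence, which is part of what makes libraries like Chai pleasant.
function expect(actual) {
  const assertion = {
    equal(expected) {
      if (actual !== expected) {
        throw new Error(`expected ${actual} to equal ${expected}`);
      }
      return assertion; // allow further chaining after the assertion
    },
  };
  // "Language chain" words like `to` and `be` assert nothing: each getter
  // simply returns the assertion object so the chain can continue.
  for (const word of ["to", "be"]) {
    Object.defineProperty(assertion, word, { get: () => assertion });
  }
  return assertion;
}

// Usage: reads like a sentence; throws if the assertion fails.
expect(1 + 1).to.be.equal(2); // passes silently
```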

Fernando Doglio:

All right, cool. A lot of people use these libraries that are intrinsic to the way the language works, and we just don't really sit and think about the work that goes into those libraries to make them work the way they do. So it's interesting to have your insight. Last question: what is one thing you wish you knew when you first started coding?

Lucas da Costa:

I think I wish I had said "it depends" more often. You know, when you start in this role, I think many people get into software engineering because they think everything's going to be exact and decisions will be easy to make, because you're just going to look at some numbers and then always make the best decision. But the truth is, you can make numbers say whatever you want them to say. And every situation has so many caveats and so many things to consider that you need to be comfortable saying "it depends", because it always depends on many things. And I think seniority is very closely related to how often you say "it depends" and how well you consider what things depend on. So I wish I knew that early on, that I saw more shades of gray instead of seeing things in black and white.

Fernando Doglio:

Wisdom. Definitely. All right, thank you for your time, Lucas, it was really fun and interesting to have you here. Please tell our audience where they can find you and we'll just....

Lucas da Costa:

Yeah, absolutely. It was my pleasure as well, Fernando, so thanks a lot for the invite again. If people want to find me, they can go to my website: that's Lucas, F for Foxtrot, Costa dot com, so lucasfcosta.com. You can find my GitHub there, which is LucasFCosta, and you can find me on Twitter at thewizardlucas. All those links are on the website. And, you know, if you want to read my musings about software and my thoughts on how to solve problems, my website is the place to go.

Fernando Doglio:

Awesome. All right, thanks again. That's it, folks, catch you on the next one.

Who is Lucas?
What is testing?
The testing Pyramid
On TDD
What is 100% coverage and should you aim for it?
On trusting your tests
Should developers test their own features?
Thoughts on the QA professional and their role
Preferred JavaScript testing tools
About "Testing JavaScript Applications"
What's the best advice ever received?
His most exciting project
One thing he regrets not knowing before starting to code
Where can people find him?