News from the testing world

  • Mutation Testing

    Mutation Testing, by Goran Petrovic

    History

    It's been a long-standing tradition of my team to organize hackathons twice a year. In the weeks prior to the hackathon, the team gathers and brainstorms ideas for projects, ranging from improving the testing infrastructure or an existing process to trying out a wild idea they've had for some time. Just before the hackathon, the team rates the accumulated ideas on a coolness-impact scale: how much fun does a project sound vs. how impactful could it potentially be; while impact is important, for hackathons, fun is non-negotiable. Then, engineers who are excited to work on some of the proposed projects subscribe and form teams. It was no different in the cold winter of 2013, where among the plethora of cool and wild ideas, one was to prototype mutation testing.

    For those who are not familiar with it, mutation testing is a method of evaluating test quality by injecting bugs into the code and seeing whether the tests detect the fault or not. The more injected bugs the tests catch, the better they are. Here's an example, negating the if condition:

        # Original
        def checkout(cart):
            if not cart.items:
                raise Error("cart empty")
            return checkout_internal(cart)

        # Mutant
        def checkout(cart):
            if cart.items:
                raise Error("cart empty")
            return checkout_internal(cart)

    If a test fails, we say it kills the mutant, and if no tests fail, we say that the mutant is alive.

    By the end of the hackathon, mutagenesis was implemented for C++ and Python, and a prototype was born: a shell script that evaluates generated mutants in a diff (pull request)[…]

    12.04.2021 | 10:16 Read more…
  • API Testing Challenge 12 - How To - DELETE todos/id 200

    This post and video show how to complete the challenge DELETE /todos/id (200) to successfully delete a todo item in the application.

    What are the API Challenges?

    Our API Challenges Application has a fully functional cloud-hosted API, and a set of challenges to work through.

    DELETE /todos/id (200)

    Issue a DELETE request to successfully delete a todo:
    - a DELETE request will delete a todo if the provided id exists
    - /todos/id end point, e.g. DELETE /todos/3 to delete the todo with id==3
    - 200 is a success code; in this case it means the todo was deleted
    - the body of the message is empty
    - add the X-CHALLENGER header

    Basic Instructions

    Issue a DELETE request to end point "/todos/id" where id is replaced with the id of an existing todo. If you don't know any, a GET /todos would show a list of todos, or you could POST /todos to create one.
    - if running locally that endpoint would be http://localhost:4567/todos/id
    - if running in the cloud that endpoint would be https://apichallenges.herokuapp.com/todos/id

    The request should have an X-CHALLENGER header to track challenge completion. The response status code should be 200 when all the details are valid and the todo exists.

    To double check that the todo item was deleted, you could issue a GET request on the todo directly and receive a 404, or issue a GET request on /todos and check it is not in the list of todos.

    Insomnia Details

        > DELETE /todos/62 HTTP/1.1
        > Host: apichallenges.herokuapp.com
        > User-Agent: insomnia/2020.3.3
        >[…]

    12.04.2021 | 4:30 Read more…
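The request described in the excerpt can be sketched with Python's requests library. This builds the DELETE request without sending it, so the shape is easy to inspect; the X-CHALLENGER value is a placeholder (a real token is issued by the challenges application), and todo id 3 is assumed to exist:

```python
import requests

BASE = "https://apichallenges.herokuapp.com"
todo_id = 3  # assumed existing todo; GET /todos lists real ids

# Build (without sending) the DELETE request the challenge expects.
req = requests.Request(
    "DELETE",
    f"{BASE}/todos/{todo_id}",
    headers={"X-CHALLENGER": "your-challenger-guid"},  # placeholder token
).prepare()

# Sending it: requests.Session().send(req)
# Expected on success: status 200 with an empty body; a follow-up
# GET /todos/3 should then return 404.
```

To actually complete the challenge you would send the prepared request with a session and then re-GET the todo to confirm the 404, as the post describes.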
  • 2021/04/12 My breakdown of “Mike Monteiro: 13 Ways Designers Screw Up Client Presentations” (and how it applies to testers)

    Today I'm going to be talking about a presentation I found on YouTube, titled "13 Ways Designers Screw Up Client Presentations", by Mike Monteiro (@Monteiro on Twitter). It is aimed at designers, whom he classifies as "anyone who puts anything on the web". So why would a tester, who doesn't put content on the web, not only watch it, but blog about it? I had seen a previous video of his, Fuck You, Pay Me, and found him to be an interesting speaker, so I thought I'd give it a shot. I'm glad I did, as a good chunk of what he says could be applied to testers (or developers, or many other roles which require expert knowledge).

    If you want to watch the whole thing yourself (and I would encourage you to do so), you can watch it below. It is just under an hour long, which isn't always easy to spare. If you would like a breakdown of his presentation, with me referencing sections that I feel would apply to testers, then keep on reading.

    Introduction: Quality is not enough

    Mike asks whether good things come to those who wait, whether quality rises, whether the making was enough, and whether our place is in front of a screen. Does good design (testing) sell itself? No. Mike says that for collaborative environments, beer on tap, a ping pong table, for all of that, there is one thing that will hold true: if you stop paying people, they stop coming to work[…]

    12.04.2021 | 2:00 Read more…


  • Tip: testers pairing with the developer
    Pairing with the developer: working in pairs has many advantages, but it is sometimes abandoned because of the "double cost". In recent years, with the rise of agile methods and Extreme Programming, this way of working has become more common. When two…
    Read more…
  • Don't say "automation is for automation experts"
    Don't say "automation is for automation experts"; it all starts with you! Ask yourself: which actions do you repeat more than 5 times a day? And how can you make it so you never have to do them again?   Tips from the members of ITCB-AB
    Read more…
Full list >>