SciForce Blog

High-Quality Software: to Pay or Not to Pay

Software development companies are under constant pressure to bring their software to market faster, as releasing ahead of the competition gives an advantage that can be vital. At the same time, fast release times and more frequent releases can compromise the quality of the product, increasing the chances of defects and bugs. It is a common debate in software development projects whether to spend time improving software quality or to release more valuable features sooner. The pressure to deliver functionality often eats into the time that could be dedicated to architecture and code quality. However, reality shows that high-performing IT companies can release fast (Amazon, for example, deploys new software to production through its Apollo deployment service every 11.7 seconds) with 60 times fewer failures. So, do we actually need to choose between quality, time, and price?

Software quality refers to many things. It measures whether the software satisfies its functional and non-functional requirements. Functional requirements specify what the software should do, including technical details, data manipulation and processing, or any other specific function. Non-functional requirements, or quality attributes, include things like disaster recovery, portability, privacy, security, supportability, and usability.

To understand software quality, we can explore the CISQ software quality model, which outlines the quality aspects and relevant factors needed for a holistic view. It rests on four important indicators of software quality:

Reliability – the risk of software failure and the stability of a program when exposed to unexpected conditions. Quality software should have minimal downtime, good data integrity, and no errors that directly affect users.

Performance efficiency – an application’s use of resources and how it affects scalability, customer satisfaction, and response time. It rests on the software architecture, source code design, and individual architectural components.

Security – protection of information against the risk of software breaches, which relies on coding and architectural strength.

Maintainability – the amount of effort needed to adjust the software, adapt it for other goals, or hand it over from one development team to another. The key principles here are compliance with software architectural rules and consistent coding across the application.

Of course, there are other factors that ensure software quality and provide a more holistic view of quality and the development process:

Rate of delivery – how often new versions of the software are shipped to customers.

Testability – faults are easier to find in software with high testability, making such systems less likely to contain errors when shipped to end users.

Usability – the user interface is the only part of the software visible to users, so it’s crucial to have a great UI. Simplicity and task execution speed are two factors that make for a better UI.

User sentiment – measuring how end users feel when interacting with an application or system helps companies get to know them better, incorporate their needs into upcoming sprints, and ultimately broaden their impact and market presence.

Continuous improvement – implementing a practice of constant process improvement is central to quality management. It can help your team develop its own best practices and share them further, justify investments, and increase self-organization.

There are obviously many aspects that describe quality software, but not all of them are evident to the end user: a user can tell if the user interface is good, and an executive can assess whether the software makes the staff more efficient. Users will most probably notice defects, bugs, and inconsistencies. What they do not see is the architecture of the software.
Software quality can thus fall into two major categories: external (such as the UI and defects) and internal (architecture). A user can see what makes up the high external quality of a software product but cannot tell the difference between higher and lower internal quality. A user can therefore judge whether to pay more to get a better user interface, since they can assess what they get. But users do not see the internal modular structure of the software, let alone judge whether it is better, so they may be reluctant to pay for something they neither see nor understand. And why should any software company put time and effort into improving the internal quality of its product if it has no direct effect?

When users do not see or appreciate the extra effort spent on product architecture, and the demand for delivery speed keeps increasing along with the demand for cost reduction, companies are tempted to release more new features that show visible progress to their customers. However, this is a trap: it reduces the initial time and cost of the software but makes it more expensive to modify and upgrade in the long run. One of the principal benefits of internal quality is making it easier to figure out how the application works, so developers can add things easily. For example, if the software is divided into separate modules, you do not have to read the whole codebase; you can look through a few hundred lines in a couple of modules to find the necessary information. A more robust architecture – and therefore better internal quality – makes adding new features easier, which means faster and cheaper. Besides, a software product’s customers have only a rough idea of what features they need and learn gradually as the software is built, particularly after early versions are released to their users. This entails constant changes to the software, including languages, libraries, and even platforms.
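As a toy illustration of the modularity point above (all names and numbers are ours, and the “modules” are collapsed into functions in a single file for brevity), keeping each rule in one place means adding a feature requires reading one small unit, not the whole codebase:

```python
# Hypothetical sketch: each concern lives in its own small unit, so a
# change to tax or pricing rules touches exactly one place.

def net_price(amount, discount):
    """Pricing rules live here and nowhere else."""
    return amount * (1.0 - discount)

def vat(amount, rate=0.2):
    """Tax rules are a separate concern - in a real system, a separate module."""
    return amount * rate

def invoice_total(amount, discount):
    # Calling code composes the units; it never re-implements their rules.
    net = net_price(amount, discount)
    return net + vat(net)

print(invoice_total(100.0, 0.1))  # 90.0 net + 18.0 VAT = 108.0
```

A developer adding, say, a new discount rule only needs to read and change `net_price`; the rest of the system is untouched.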
With poor internal quality, even small changes require developers to understand large areas of code, which is quite tough. When they make changes, unexpected breakages happen, leading to long test cycles and defects that need to be fixed. Concentrating only on external quality therefore yields fast initial progress, but as time goes on, it gets harder and harder to add new features. High internal quality reduces that drop-off in productivity.

But how can you achieve high external and internal quality when you don’t have endless time and resources? Following the build life cycle from story to code on a developer’s desktop could be an answer. While testing, use automation throughout the process, including automated functional, security, and other modes of testing. This provides teams with quality metrics and automated pass/fail rates. When your most frequent tests are fully automated and only manual testing of the highest-quality release candidates remains, you get automated build-life quality metrics that cover the full life cycle. This enables developers to deliver high-quality software quickly and to reduce costs through higher efficiency.

Neglecting internal quality leads to a rapid build-up of work that eventually slows down new feature development. Keeping internal quality high keeps the codebase under control, which pays off when adding features and changing the product. Therefore, to answer the question in the title: it is actually a myth that high-quality software is more expensive, so no such trade-off exists. And you definitely should spend more time and effort on building a robust architecture as a basis for further development – unless you are just working on a college assignment that you’ll forget in a month.
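The automated quality-gate idea described above can be sketched in a few lines. The test names and the promotion threshold below are illustrative assumptions, not a real CI configuration:

```python
# Hypothetical sketch: turning automated pass/fail results into a
# build-life quality metric that gates promotion of a build.

def pass_rate(results):
    """Share of automated checks that passed in this build."""
    return sum(results.values()) / len(results)

def gate(results, threshold=0.95):
    # The build is promoted only when the automated metric clears the bar;
    # manual testing is reserved for builds that pass this gate.
    return pass_rate(results) >= threshold

build = {"unit": True, "functional": True, "security": True, "performance": False}
print(f"pass rate: {pass_rate(build):.2f}, promote: {gate(build)}")
```

In a real pipeline, each key would come from an automated test suite (unit, functional, security, and so on), and the same metric would be tracked across the whole build life cycle.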

Top-5 NLP news of December by a CTO of Sciforce — Max Ved

We are excited to announce that we are launching a new section today at Sciforce – “Top AI news of the month.” We decided to create this column to keep our valued clients aware of the latest news and technologies in the AI world. So, here, we will share some interesting information with you about SciTech! Well, today we will discuss Top-5 NLP news in December:

What’s Next for Generative Adversarial Networks (GANs): Latest Techniques and Applications

Generative Adversarial Networks (GANs) are improving year over year. In October 2021, NVIDIA presented a new model, StyleGAN3, that outperforms StyleGAN2 with its hierarchical refinement. The new model resolves the “sticking issues” of StyleGAN2 and learns to mimic camera motion. Moreover, StyleGAN3 promises to improve models for video and animation generation. That’s impressive progress compared to 2014, when GANs entered the picture with low-resolution images. We are also witnessing applications beyond simple image generation. They include, but are not limited to: medical products, training data, scientific simulation development, improvements to the augmented reality (AR) experience, and speech enhancement and generation. Let’s delve into the most impressive applications we’ve got so far! Lots of articles illustrate GANs’ abilities when it comes to image generation and editing. You’ve probably read about functions like face aging, text-to-image translation, and frontal face view or pose generation. As a starter, we recommend 18 Impressive Applications of Generative Adversarial Networks (GANs) and Best Resources for Getting Started With GANs by Jason Brownlee. In this article, we’d like to go deeper into the latest real-life applications of GANs. Labeling medical datasets is expensive and time-consuming, but it seems GANs have something to offer here. Since GANs predominantly belong to the data augmentation techniques, let’s dwell on the latest updates in healthcare. GAN-based data augmentation helps computer vision practitioners struggling with class imbalance, which leads to biased models during training. Augmentation with GANs helps fight overfitting and the inability to generalize to novel examples. This is how GANs increase performance for underrepresented classes in chest X-ray classification, as per the research of Sundaram et al. in 2021.
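For contrast, the “standard” augmentation baseline that such studies compare GAN augmentation against can be sketched in a few lines – oversampling an underrepresented class with small random perturbations. The class names, feature values, and noise level below are illustrative assumptions, not data from the study:

```python
import random

def augment(sample, noise=0.05):
    """Jitter one feature vector to create a plausible new example."""
    return [x + random.uniform(-noise, noise) for x in sample]

def balance(dataset):
    """Oversample every class up to the size of the largest one."""
    biggest = max(len(samples) for samples in dataset.values())
    balanced = {}
    for label, samples in dataset.items():
        # Generate jittered copies of random existing samples until
        # this class is as large as the biggest one.
        extra = [augment(random.choice(samples))
                 for _ in range(biggest - len(samples))]
        balanced[label] = samples + extra
    return balanced

data = {"healthy": [[0.9, 0.1]] * 8, "pathology": [[0.2, 0.8]] * 2}
balanced = balance(data)
print({label: len(samples) for label, samples in balanced.items()})
```

A GAN-based approach replaces the jitter step with samples drawn from a learned generator, which is why it can produce more realistic minority-class examples on small, heavily imbalanced datasets.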
Sundaram et al. showed that GAN-based data augmentation is more efficient than standard data augmentation. At the same time, the researchers point out that GAN data augmentation was most effective when applied to small, significantly imbalanced datasets and had limited impact on large ones. Researchers from the University of New Hampshire in the US, in turn, demonstrated that GAN-based data augmentation is beneficial for neuroimaging. Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique for mapping the functioning human cortex. Since fNIRS is used in brain-computer interfaces, a large amount of new data for training deep learning classifiers is crucial. A Conditional Generative Adversarial Network (CGAN) combined with a CNN classifier achieved 96.67% task classification accuracy, as per the research of Sajila D. Wickramaratne and Shaad Mahmud in 2021. Researchers from the University of California, Berkeley and Glidewell Dental Labs presented one of the first real applications for medical product development: with the help of generative models, dental crowns can be designed that match the morphology quality achieved by dental experts. It takes a professional in the dental industry years of training to design such crowns, so this work paves the way for mass customization of products in healthcare. GANs are also a good fit for super-resolution medical imaging, such as low-dose computed tomography (CT) and low-field magnetic resonance imaging (MRI). MedSRGAN (Medical Images Super-Resolution using Generative Adversarial Networks), a GAN-based method proposed in 2020, increases radiologists’ efficiency: it helps improve the quality of scans and avoid the harmful effects the procedure may bring. Automatic speech recognition (ASR) is one of the areas of our expertise. Speech enhancement GANs (SEGAN) refine noisy inputs into high-quality output.
This function is crucial, for example, for people with speech impairments, whose quality of life GANs could thus enhance. Recently, Huy Phan et al. proposed using “multiple generators that are chained to perform a multi-stage enhancement.” As the researchers state, the new models, ISEGAN and DSEGAN, perform better than SEGAN. GANs, with their creative generation capabilities, also come in handy for augmented reality (AR) scenes. Recent use cases include completing environmental maps with lighting, reflections, and shadows. ARShadowGAN, presented in 2020 by Daquan Liu et al., generates shadows for virtual objects in single-light scenes. This technology bridges the real-world environment and the virtual object’s shadow without 3D geometric details or any explicit estimation of illumination. When it comes to advertising, the phrase “time is money” means a lot. One of our use cases involved automated generation of advertising images at scale. It costs a designer time and money to resize images for marketing campaigns across channels, from social media to Amazon. However, Super-Resolution Using a Generative Adversarial Network (SRGAN), a method for single image super-resolution, can deal with it: using SRGAN, it is possible to produce high-quality resized images without any manual design work. “What could be better than data for a data scientist? More data!” This joke became a real application thanks to GANs. As any neural network-based model is hungry for training data, generative models that can create labeled training data on demand could become a game-changer. For instance, Zhenghao Fei et al. (University of California, Davis) demonstrated how a semantically constrained GAN (CycleGAN plus a task constrained network) can eliminate the labor-, cost-, and time-consuming process of data labeling, ensuring more data-efficient and generalizable fruit detection.
Simply put, the semantically constrained GAN can generate realistic day and night images of grapevines from 3D renderings while retaining grape position and size. Labeled data generation could also be beneficial in the NLP domain, supporting research on low-resource languages. For instance, Sangramsing Kayte used GANs for text-to-speech synthesis for low-resource languages of the Indian subcontinent. Recent research shows that ML models can leak sensitive information from their training samples. For example, the paper This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces (2021) illustrates that many images of faces produced by GANs strongly resemble real faces taken from the training data. Researchers propose differential privacy, which could help networks learn the data distribution while protecting the privacy of the training data. GANs have demonstrated impressive progress since Ian Goodfellow first introduced them in 2014. Although the technology is still in its infancy, we are already witnessing how GANs improve the design of medical products, automate image editing for advertising, and merge with AR technology. At the same time, the privacy of generated data remains an open concern, and differential privacy is one of the techniques to consider.
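The core of differential privacy is easy to sketch: the classic Laplace mechanism adds noise calibrated to a query’s sensitivity and a privacy budget epsilon before a statistic is released, so no single training record can noticeably shift the output. The numbers below are illustrative, and GAN-specific schemes such as DP-SGD are more involved:

```python
import random

def laplace_noise(scale):
    """Draw from Laplace(0, scale) as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count, sensitivity=1.0, epsilon=0.5):
    # Smaller epsilon -> more noise -> a stronger privacy guarantee:
    # changing one record shifts the count by at most `sensitivity`,
    # which the added noise masks.
    return true_count + laplace_noise(sensitivity / epsilon)

print(private_count(1000))  # close to 1000, but randomized on every call
```

In differentially private GAN training, an analogous idea is applied to the gradients rather than to a released count, which bounds how much any one training image can influence the generator.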
