TOP-10 AI News of January

Last month at Sciforce, we started a new section in the blog, “Top AI news of the month,” and showed you the Top-5 NLP news of December, in which our CTO, Max Ved, selected the most interesting news in the NLP sector and shared his opinion on it. Today, we are excited to present ten great pieces of AI news from January. Here is what we have prepared for you:

1) Boston Dynamics released a video demo of Atlas, a humanoid robot. The robot can run, jump, and grab and throw objects. Atlas has a claw-like gripper designed for heavy lifting tasks, consisting of one fixed finger and one moving finger. Honestly speaking, there is not much AI in these robots, but we constantly follow Boston Dynamics’ progress and are excited to see new tricks and new applications. We believe that, in many cases, robots can increase the efficiency and quality of work, become great assistants for humans, and simplify our lives.

2) GitHub Code Brushes uses ML to update code “like painting with Photoshop.” Recently, GitHub Next announced the Code Brushes project, which uses machine learning to improve code “like painting with Photoshop.” With this tool, developers can “brush” over their code to see it update in real time. Different brushes produce different effects: making the code more readable, debugging it, improving compatibility, and so on. The original post shows before-and-after examples of the debugging brush.

3) Nick Cave has written a review of a song written by ChatGPT in his style. The singer said that “It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque.” Our CTO, Max Ved, jokes that the goal of AI is not to replace good artists but rather to eliminate bad ones! We all need to understand that the model can be imperfect right now, and there can be inaccurate results on more complex tasks. But we are absolutely sure that such technology is a breakthrough in the neural network field.

4) Technavio has released its 2023–2027 forecast and analysis of the Artificial Intelligence-based Personalization Market by Application, Technology, and Geography. The study covers key drivers, trends, challenges, the customer landscape, a segment overview, and much other AI-related information. Our CTO notes that it is usually hard to predict things accurately, but when it comes to AI market growth and trends, the technology may even be underrated in such surveys.

5) Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory report that AI has discovered three new nanostructures. This vivid example once again proves that AI can be widely used not only to fulfill the basic needs of humanity and perform simple tasks but is also undoubtedly useful for science.

6) AI can create short-form videos for YouTube, Instagram, TikTok, and Snapchat given just a single word about the video topic! How does it work? Based on the user’s request, the app QuickVid chooses a background video from a library and writes a script and keywords; it then overlays images generated by DALL-E 2 and adds background music from YouTube’s royalty-free music library along with a synthetic voiceover. QuickVid’s creator, Daniel Habib, says that he is building the service to help creators meet the “ever-growing” demand from their fans. We think such a tool can be a great assistant for content creators.
For now, though, the samples of generated content we have seen follow very similar formats, and it will be interesting to see how such content complies with copyright legislation.

7) In Hong Kong, fashion designers have presented new collections created with the help of a new AI assistant, AiDA, short for “AI-based Interactive Design Assistant.” At the fashion show, 14 designers presented more than 80 outfits. The CEO of AiDLab, Calvin Wong, states that the software was created to serve as a “supporting tool” for designers. Max Ved thinks that using AI in fashion can revolutionize the whole industry and become a helpful tool that saves designers valuable time and resources.

8) China’s deepfake laws came into force on January 10. The key provisions for using this technology are the following: the real identity of users must be verified; deepfakes cannot be used to disseminate fake news; the consent of the person whose image is used is required; and the content must carry a notice informing users that the image or video has been altered with the technology. We believe such legislation will be a great mechanism to prevent abuse of the technology.

9) Artificial intelligence can help prevent illegal construction. In this project, drones fly over the city and take photographs of buildings. The data is then sent to a computing platform that determines what class of facade a particular building belongs to.

10) DLR launches innovative robot control software. The company claims to eliminate the need to program robots by using AI that combines deep learning, computer vision, and motion control algorithms to perceive the environment as a 3D map in real time. Essentially, their software allows users to teach robots tasks by demonstrating the job to the robot, which is the simplest and most intuitive way possible. We all know how hard it can be to teach a robot to perform tasks, but this method simplifies the process so that anyone can work with robots.

Check out our December news!
Software development companies are always under pressure to launch their software onto the market faster, as releasing ahead of the competition gives an advantage that can be vital. At the same time, fast release times and more frequent releases can compromise the quality of the product, increasing the chances of defects and bugs. It is a common debate in software development projects whether to spend time on improving software quality or to release more valuable features faster. The pressure to deliver functionality often cuts into the time that could be dedicated to working on architecture and code quality. However, reality shows that high-performing IT companies can release fast (Amazon, for example, deploys new software to production through its Apollo deployment service every 11.7 seconds) with 60 times fewer failures. So, do we actually need to choose between quality, time, and price?

Software quality refers to many things. It measures whether the software satisfies its functional and non-functional requirements (a short code sketch of this distinction follows after the lists below):

Functional requirements specify what the software should do, including technical details, data manipulation and processing, or any other specific function.

Non-functional requirements, or quality attributes, include things like disaster recovery, portability, privacy, security, supportability, and usability.

To understand software quality, we can explore the CISQ software quality model, which outlines all quality aspects and relevant factors to give a holistic view of software quality. It rests on four important indicators of software quality:

Reliability – the risk of software failure and the stability of a program when exposed to unexpected conditions. Quality software should have minimal downtime, good data integrity, and no errors that directly affect users.

Performance efficiency – an application’s use of resources and how it affects scalability, customer satisfaction, and response time. It rests on the software architecture, source code design, and individual architectural components.

Security – protection of information against the risk of software breaches, which relies on coding and architectural strength.

Maintainability – the amount of effort needed to adjust software, adapt it for other goals, or hand it over from one development team to another. The key principles here are compliance with software architectural rules and consistent coding across the application.

Of course, there are other factors that ensure software quality and provide a more holistic view of quality and the development process:

Rate of delivery – how often new versions of the software are shipped to customers.

Testability – finding faults in software with high testability is easier, making such systems less likely to contain errors when shipped to end users.

Usability – the user interface is the only part of the software visible to users, so it’s crucial to have a great UI. Simplicity and task execution speed are two factors that make for a better UI.

User sentiment – measuring how end users feel when interacting with an application or system helps companies get to know them better, incorporate their needs into upcoming sprints, and ultimately broaden their impact and market presence.

Continuous improvement – making constant process improvement a practice is central to quality management. It can help your team develop its own best practices and share them further, justify investments, and increase self-organization.
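To make the distinction between functional and non-functional requirements a bit more concrete, here is a minimal, hypothetical pytest-style sketch; the convert_currency function and the one-second budget are invented purely for illustration:

```python
import time

def convert_currency(amount_usd: float, rate: float) -> float:
    """Hypothetical function under test: convert a USD amount at a given rate."""
    return round(amount_usd * rate, 2)

def test_functional_requirement():
    # Functional requirement: the software does what it should do,
    # here a correct conversion for a known input.
    assert convert_currency(10.0, 1.5) == 15.0

def test_non_functional_requirement_performance():
    # Non-functional requirement (quality attribute): the same operation
    # stays fast enough even when called many times in a row.
    start = time.perf_counter()
    for _ in range(100_000):
        convert_currency(10.0, 1.5)
    assert time.perf_counter() - start < 1.0  # illustrative budget, not a real SLA
```

Run under pytest, each check yields a pass or fail result that can feed the automated quality metrics discussed later in this article.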
There are obviously a lot of aspects that describe quality software; however, not all of them are evident to the end user. A user can tell if the user interface is good, and an executive can assess whether the software is making the staff more efficient. Most probably, users will also notice defects, bugs, and inconsistencies. What they do not see is the architecture of the software.

Software quality can thus fall into two major categories: external (such as the UI and defects) and internal (architecture). A user can see what makes up the high external quality of a software product but cannot tell the difference between higher and lower internal quality. Therefore, a user can judge whether to pay more to get a better user interface, since they can assess what they get. But users do not see the internal modular structure of the software, let alone judge whether it is better, so they might be reluctant to pay for something they neither see nor understand. And why should any software development company put time and effort into improving the internal quality of its product if it has no direct effect?

When users do not see or appreciate the extra effort spent on the product architecture, and the demand for software delivery speed keeps increasing along with the demand for cost reduction, companies are tempted to release more new features that show progress to their customers. However, this is a trap: it reduces the initial time spent and the cost of the software but makes it more expensive to modify and upgrade in the long run.

One of the principal benefits of internal quality is that it makes it easier to figure out how the application works, so developers can add things easily. For example, if the software is divided into separate modules, you do not have to read the whole codebase; you can look through a few hundred lines in a couple of modules to find the necessary information (see the small sketch below). More robust architecture, and therefore better internal quality, makes adding new features easier, which means faster and cheaper. Besides, software customers have only a rough idea of what features they need in a product and learn gradually as the software is built, particularly after the early versions are released to their users. This entails constant changes to the software, including languages, libraries, and even platforms. With poor internal quality, even small changes require developers to understand large areas of code, which is quite tough. When they make changes, unexpected breakages happen, leading to long test times and defects that need to be fixed. Therefore, concentrating only on external quality yields fast initial progress, but as time goes on, it gets harder and harder to add new features. High internal quality means reducing that drop-off in productivity.

But how can you achieve high external and internal quality when you don't have endless time and resources? Following the build life cycle from story to code on a developer desktop could be an answer. While testing, use automation throughout the process, including functional, security, and other modes of testing. This provides teams with quality metrics and automated pass/fail rates. When your most frequent tests are fully automated and only manual tests on the highest-quality releases remain, you get automated build-life quality metrics that cover the full life cycle, enabling developers to deliver high-quality software quickly and to reduce costs through higher efficiency.
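To illustrate the earlier point about modular internal structure, here is a small hypothetical sketch (all names are invented for the example): a reporting feature depends only on a narrow exporter interface, so adding a new export format is a local change in one module rather than an edit scattered across the codebase.

```python
import json
from typing import Protocol

class ReportExporter(Protocol):
    """Narrow interface: the rest of the system only knows about export()."""
    def export(self, rows: list[dict]) -> bytes: ...

class CsvExporter:
    """One small module: CSV export lives here and nowhere else."""
    def export(self, rows: list[dict]) -> bytes:
        if not rows:
            return b""
        header = ",".join(rows[0].keys())
        lines = [",".join(str(value) for value in row.values()) for row in rows]
        return "\n".join([header, *lines]).encode()

class JsonExporter:
    """Added later: a new module implementing the same interface,
    with no edits to the code that calls it."""
    def export(self, rows: list[dict]) -> bytes:
        return json.dumps(rows).encode()

def send_report(rows: list[dict], exporter: ReportExporter) -> bytes:
    # The caller depends only on the interface, not on any concrete exporter.
    return exporter.export(rows)

if __name__ == "__main__":
    data = [{"product": "widget", "units": 3}]
    print(send_report(data, CsvExporter()))
    print(send_report(data, JsonExporter()))
```

With this kind of structure, a change of mind about export formats stays contained, which is exactly the property that keeps changes cheap as a product evolves.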
Neglecting internal quality leads to a rapid build-up of work that eventually slows down new feature development. It is important to keep internal quality high in order to stay in control of the codebase, which will pay off when adding features and changing the product. Therefore, to answer the question in the title: it is actually a myth that high-quality software is more expensive, so no such trade-off exists. And you definitely should spend more time and effort on building robust architecture to have a good basis for further development – unless you are just working on a college assignment that you’ll forget in a month.
We are excited to announce that we are launching a new section at Sciforce today – “Top AI news of the month.” We decided to create this column to keep our valued clients aware of the latest news and technologies in the AI world, so here we will share interesting SciTech updates with you. Today we will discuss the Top-5 NLP news of December:
Generative Adversarial Networks (GANs) are constantly improving year over year. In October 2021, NVIDIA presented a new model, StyleGAN3, that outperforms StyleGAN2 with its hierarchical refinement. The new model resolves the “sticking” issues of StyleGAN2 and learns to mimic camera motion. Moreover, StyleGAN3 promises to improve models for video and animation generation. That is impressive progress compared to 2014, when GANs entered the picture with low-resolution images.

We are also witnessing applications beyond simple image generation. They include, but are not limited to: medical products, training data, scientific simulation development, improvements to the augmented reality (AR) experience, and speech enhancement and generation. Let’s delve into the most impressive applications we’ve got so far! Lots of articles illustrate GANs’ abilities when it comes to image generation and editing. You’ve probably read about functions like face aging, text-to-image translation, frontal face view or pose generation, and so on. As a starter, we recommend 18 Impressive Applications of Generative Adversarial Networks (GANs) and Best Resources for Getting Started With GANs by Jason Brownlee. In this article, we’d like to go deeper into the latest real-life applications of GANs.

Labeling medical datasets is expensive and time-consuming, but it seems that GANs have something to offer. Since GANs predominantly belong to the data augmentation techniques, we’d like to dwell on the latest updates in healthcare. In data augmentation, GANs help computer vision professionals who struggle with class imbalance in training datasets, which otherwise leads to biased models. GAN-based data augmentation helps fight overfitting and the inability to generalize to novel examples (a minimal code sketch of the underlying adversarial training loop appears at the end of this post). This is how GANs increase performance for underrepresented classes in chest X-ray classification, as shown by Sundaram et al. in 2021. They demonstrated that GAN-based data augmentation is more efficient than standard data augmentation. At the same time, the researchers point out that GAN data augmentation was most effective when applied to small, significantly imbalanced datasets and has a limited impact on large datasets.

Researchers from the University of New Hampshire in the US also demonstrated that GAN-based data augmentation is beneficial for neuroimaging. Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique for mapping the functioning human cortex. fNIRS is also used in brain-computer interfaces, so a large amount of new data for training deep learning classifiers is crucial. A Conditional Generative Adversarial Network (CGAN) combined with a CNN classifier achieved 96.67% task classification accuracy, as per the 2021 research of Sajila D. Wickramaratne and Shaad Mahmud.

Researchers from the University of California, Berkeley and Glidewell Dental Labs presented one of the first real applications for medical product development. With the help of generative models, dental crowns can be designed with the same morphological quality that dental experts achieve, something that takes a professional in the dental industry years of training. This paves the way for the mass customization of products in the healthcare industry. At the same time, GANs are a good fit for super-resolution medical imaging, such as low-dose computed tomography (CT) and low-field magnetic resonance imaging (MRI).
MedSRGAN (Medical Images Super-Resolution using Generative Adversarial Networks), a GAN-based method proposed in 2020, increases radiologists’ efficiency: it helps improve the quality of scans and avoid the harmful effects the procedure may bring.

Automatic speech recognition (ASR) is one of the areas of our expertise. Speech enhancement GANs (SEGAN) are applied to noisy inputs to refine them and produce high-quality output. This function is crucial, for example, for people with speech impairments, so GANs could enhance their quality of life. Recently, Huy Phan et al. proposed using “multiple generators that are chained to perform a multi-stage enhancement.” As the researchers state, the new models, ISEGAN and DSEGAN, perform better than SEGAN.

GANs, with their creative generation capabilities, also come in handy for augmented reality (AR) scenes. For example, recent use cases include completing environmental maps with lighting, reflections, and shadows. ARShadowGAN, presented in 2020 by Daquan Liu et al., generates shadows of virtual objects in single-light scenes. This technology bridges the real-world environment and the virtual object’s shadow without 3D geometric details or any explicit estimation of illumination.

When it comes to advertising, the phrase “time is money” means a lot. One of our use cases involved automated generation of advertising images at scale. For example, it costs a designer time and money to resize images for marketing campaigns across channels, from social media to Amazon platforms. However, Super-Resolution Using a Generative Adversarial Network (SRGAN) for single-image super-resolution can deal with it: using SRGAN, it is possible to produce high-quality resized images without any manual design work.

“What could be better than data for a data scientist? More data!” This joke became a real application thanks to GANs. As any neural-network-based model is hungry for training data, generative models that can create labeled training data on demand could become a game-changer. For instance, Zhenghao Fei et al. (University of California, Davis) demonstrated how a semantically constrained GAN (CycleGAN plus a task-constrained network) can eliminate the labor-, cost-, and time-consuming process of data labeling, ensuring more data-efficient and generalizable fruit detection. Simply put, the semantically constrained GAN can generate realistic day and night images of grapevines from 3D-rendered images while retaining grape position and size. Labeled data generation could also be beneficial in the NLP domain, supporting research on low-resource languages. For instance, Sangramsing Kayte used GANs for text-to-speech for low-resource languages of the Indian subcontinent.

Recent research shows that ML models can leak sensitive information contained in the training samples. For example, the paper This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces (2021) illustrates that many images of faces produced by GANs strongly resemble real faces taken from the training data. Researchers propose differential privacy, which could help networks learn the data distribution while protecting the privacy of the training data.

GANs have demonstrated impressive progress since they were first introduced by Ian Goodfellow in 2014. Although the technology is still in its infancy, we are already witnessing how GANs improve the design of medical products, automate image editing for advertising, and merge with AR technology.
At the same time, the privacy of the generated data remains a topical issue, and differential privacy is one of the techniques to consider.
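To close with something hands-on, here is a minimal sketch of the adversarial training idea behind all of the applications above, written in PyTorch. It is a toy outline under simplified assumptions (tiny fully connected networks, flattened 28x28 images, made-up hyperparameters), not an implementation of StyleGAN3, SEGAN, MedSRGAN, or any other specific model mentioned in this post:

```python
import torch
import torch.nn as nn

LATENT_DIM = 64  # size of the random noise vector fed to the generator

# Toy generator: noise vector -> flattened 28x28 "image" in [-1, 1]
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Toy discriminator: flattened image -> probability that it is real
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial step: the discriminator learns to tell real samples
    from generated ones, and the generator learns to fool it."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Update the discriminator on real and generated samples.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = (bce(discriminator(real_images), real_labels)
              + bce(discriminator(fake_images), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Update the generator so that its samples are classified as real.
    g_loss = bce(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# In a real setup, training_step would run over batches of real images
# (flattened to 784-dimensional vectors). Once trained, the generator can
# oversample an underrepresented class, as in the augmentation examples above:
# synthetic_images = generator(torch.randn(1000, LATENT_DIM))
```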