Publisher: Elsevier, Data in Brief, Link>

ABSTRACT

The COVID-19 pandemic has underlined the need for reliable information for clinical decision-making and public health policies. As such, evidence-based medicine (EBM) is essential in identifying and evaluating scientific documents pertinent to novel diseases, and the accurate classification of biomedical text is integral to this process. Given this context, we introduce a comprehensive, curated dataset composed of COVID-19-related documents.

This dataset includes 20,047 labeled documents that were meticulously classified into five distinct categories: systematic reviews (SR), primary study randomized controlled trials (PS-RCT), primary study non-randomized controlled trials (PS-NRCT), broad synthesis (BS), and excluded (EXC). The documents, labeled by collaborators from the Epistemonikos Foundation, incorporate information such as document type, title, abstract, and metadata, including PubMed ID, authors, journal, and publication date.
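As a minimal sketch of how such a labeled corpus might be loaded and inspected before training a text classifier, the snippet below uses a tiny in-memory stand-in; the column names (`pubmed_id`, `title`, `abstract`, `doc_type`) are assumptions, not the dataset's documented schema.

```python
# Hypothetical sketch of loading and filtering the labeled corpus.
# Column names are illustrative assumptions, not the real schema.
import pandas as pd

LABELS = ["SR", "PS-RCT", "PS-NRCT", "BS", "EXC"]

# Tiny in-memory stand-in for the real dataset export.
corpus = pd.DataFrame({
    "pubmed_id": [101, 102, 103],
    "title": ["A systematic review of X", "An RCT of Y", "A cohort study of Z"],
    "abstract": ["...", "...", "..."],
    "doc_type": ["SR", "PS-RCT", "PS-NRCT"],
})

# Keep only rows carrying one of the five curated labels.
labeled = corpus[corpus["doc_type"].isin(LABELS)]

# Class distribution, useful to check imbalance before training.
counts = labeled["doc_type"].value_counts().to_dict()
print(counts)
```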

Uniquely, this dataset has been curated by the Epistemonikos Foundation and is not readily accessible through conventional web-scraping methods, thereby attesting to its distinctive value in this field of research. In addition to this, the dataset also includes a vast evidence repository comprising 427,870 non-COVID-19 documents, also categorized into SR, PS-RCT, PS-NRCT, BS, and EXC. This additional collection can serve as a valuable benchmark for subsequent research. The comprehensive nature of this open-access dataset and its accompanying resources is poised to significantly advance evidence-based medicine and facilitate further research in the domain.


Publisher: Multimedia Tools and Applications, Link>

ABSTRACT

This paper proposes a novel online self-learning detection system for different types of objects. The user can select an arbitrary detection target: an initial detection model is generated from a small image sample selected by the user, and the model then continues training automatically. The proposed framework is divided into two parts. First, the initial detection model: Haar-like features are extracted from the user-selected image region to generate a feature pool, which is used to train classifiers and obtain a positive-negative (PN) classifier model. Second, as the video plays, the detection model evaluates new samples with a Nearest Neighbor (NN) classifier to obtain their PN similarity, and online reinforcement learning continuously updates the classifier, the PN model, and the new classifier. Experiments show that, even with few detection samples, automatic online reinforcement learning achieves satisfactory results.
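A minimal sketch of the NN-based PN similarity idea mentioned above: a candidate patch is compared against stored positive and negative template patches via normalized cross-correlation, and a relative similarity in [0, 1] is derived from its distance to the nearest positive versus the nearest negative. The exact formulation here (distance-based relative similarity) is an assumption in the spirit of PN-learning trackers, not the paper's implementation.

```python
# Hedged sketch of a nearest-neighbor PN similarity, not the paper's exact method.
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation (correlation coefficient) of two patches.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def relative_similarity(patch, positives, negatives):
    dp = 1.0 - max(ncc(patch, p) for p in positives)   # distance to nearest positive
    dn = 1.0 - max(ncc(patch, n) for n in negatives)   # distance to nearest negative
    return dn / (dn + dp + 1e-8)                       # close to 1 => likely positive

rng = np.random.default_rng(0)
pos = [rng.random((8, 8)) for _ in range(3)]           # stored positive templates
neg = [rng.random((8, 8)) for _ in range(3)]           # stored negative templates
query = pos[0] + 0.01 * rng.random((8, 8))             # near-duplicate of a positive
score = relative_similarity(query, pos, neg)
print(score)
```

In an online setting, confidently classified new samples would be appended to `pos` or `neg`, which is the update step that keeps the model learning as the video plays.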


Publisher: Computers and Electronics in Agriculture, Link>

ABSTRACT

Decision support systems have become increasingly popular in the domain of agriculture. With the development of automated machine learning, agricultural experts are now able to train, evaluate and make predictions using cutting edge machine learning (ML) models without the need for much ML knowledge. Although this automated approach has led to successful results in many scenarios, in certain cases (e.g., when few labeled datasets are available) choosing among different models with similar performance metrics is a difficult task. Furthermore, these systems do not commonly allow users to incorporate their domain knowledge that could facilitate the task of model selection, and to gain insight into the prediction system for eventual decision making. To address these issues, in this paper we present AHMoSe, a visual support system that allows domain experts to better understand, diagnose and compare different regression models, primarily by enriching model-agnostic explanations with domain knowledge. To validate AHMoSe, we describe a use case scenario in the viticulture domain, grape quality prediction, where the system enables users to diagnose and select prediction models that perform better. We also discuss feedback concerning the design of the tool from both ML and viticulture experts.
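The model-selection difficulty described above can be made concrete with a small sketch: two regressors with similar cross-validated scores may still disagree per sample, which is exactly where domain knowledge helps. The feature names and synthetic "grape quality" target below are invented for illustration; this is not AHMoSe itself.

```python
# Illustrative sketch of comparing regression models by cross-validated score.
# Features and target are synthetic stand-ins, not real viticulture data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.random((120, 3))  # e.g., hypothetical soil, climate, and vigor features
y = 2 * X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(120)  # synthetic "quality"

models = {"ridge": Ridge(), "forest": RandomForestRegressor(random_state=0)}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
print(scores)  # similar aggregate scores can hide very different behavior
```

A system like AHMoSe goes beyond such aggregate numbers by exposing per-sample, model-agnostic explanations that an expert can check against domain knowledge.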


Publisher: Machine Vision and Applications, Link>

ABSTRACT

In the automotive industry, light-alloy aluminum castings are an important element for determining roadworthiness. X-ray testing with computer vision is used during automated inspections of aluminum castings to identify defects inside the test object that are not visible to the naked eye. In this article, we evaluate eight state-of-the-art deep object detection methods (based on YOLO, RetinaNet, and EfficientDet) for detecting aluminum casting defects. We propose a training strategy that uses a small number of defect-free X-ray images of castings with superimposed simulated defects (avoiding manual annotations). The proposed solution is simple, effective, and fast. In our experiments, the YOLOv5s object detector was trained in just 2.5 h, and the performance achieved on the testing dataset (with only real defects) was very high (average precision was 0.90 and the F1 score was 0.91). This method can process 90 X-ray images per second, i.e., it can be used to help human operators conduct real-time inspections. The code and datasets used in this paper have been uploaded to a public repository for future studies. It is clear that deep learning-based methods will be used more by the aluminum castings industry in the coming years due to their high level of effectiveness. This paper offers an academic contribution to such efforts.
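The training strategy above, superimposing simulated defects on defect-free radiographs, can be sketched as follows. Since a void defect attenuates less X-ray energy, it appears as a brighter region; here a Gaussian blob brightens pixels multiplicatively and yields a ground-truth mask for free. All parameters are illustrative, not the paper's simulation model.

```python
# Hedged sketch of superimposing one simulated defect on a defect-free image.
# The Gaussian-blob model and all parameters are illustrative assumptions.
import numpy as np

def add_simulated_defect(image, cx, cy, radius, strength=0.3):
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    blob = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * radius ** 2))
    out = image * (1.0 + strength * blob)     # brighter where the "void" sits
    return np.clip(out, 0.0, 1.0), blob > 0.1  # image + ground-truth mask

casting = np.full((64, 64), 0.5)  # uniform stand-in for a real radiograph
img, mask = add_simulated_defect(casting, cx=32, cy=32, radius=5)
```

Repeating this with random positions, sizes, and strengths produces an annotated training set without any manual labeling, which is the key practical benefit described above.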


Publisher:, Link>

ABSTRACT

In this chapter, relevant applications on X-ray testing are described. We cover X-ray testing in (i) castings, (ii) welds, (iii) baggage, (iv) natural products, and (v) others (like cargos and electronic circuits). For each application, the state of the art is presented. Approaches in each application are summarized showing how they use computer vision techniques. A detailed approach is shown in each application and some examples using Python are given in order to illustrate the performance of the methods.


Publisher: Revista Bits de Ciencia, Link>

ABSTRACT

It was 2010, and I was pursuing my doctorate, focused on personalization and recommender systems, at the University of Pittsburgh, located in the city of the same name in the western part of the state of Pennsylvania, United States. The state-of-the-art techniques in my research topic came from the field known as Machine Learning, so I felt the need to take an advanced course to round out my training. In the fall semester I finally enrolled in the Machine Learning course, and thanks to an academic agreement I was able to take it at the neighboring university, Carnegie Mellon University. I was genuinely excited to take a course on a topic of such growing relevance at one of the best universities in the world in the field of computer science.


Publisher: arXiv, Link>

ABSTRACT

Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level. However, there has been limited progress in generating useful discourse-level representations. In this work, we propose to use ideas from predictive coding theory to augment BERT-style language models with a mechanism that allows them to learn suitable discourse-level representations. As a result, our proposed approach is able to predict future sentences using explicit top-down connections that operate at the intermediate layers of the network. By experimenting with benchmarks designed to evaluate discourse-related knowledge using pre-trained sentence representations, we demonstrate that our approach improves performance in 6 out of 11 tasks by excelling in discourse relationship detection.
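The top-down prediction idea can be sketched conceptually: a small head reads an intermediate-layer sentence representation and predicts the representation of the next sentence, with the prediction error serving as an auxiliary training signal. The linear head and MSE objective below are illustrative assumptions, not the paper's architecture.

```python
# Conceptual sketch (not the paper's implementation) of predictive coding
# as a next-sentence representation objective. The linear head is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
d = 16
W = rng.standard_normal((d, d)) * 0.1  # top-down prediction head

def predict_next(h_t):
    # Predict the next sentence's representation from the current one.
    return W @ h_t

def prediction_loss(h_t, h_next):
    # Auxiliary loss: error between predicted and actual next representation.
    err = predict_next(h_t) - h_next
    return float((err ** 2).mean())

h_t = rng.standard_normal(d)      # stand-in for an intermediate-layer state
h_next = rng.standard_normal(d)   # stand-in for the next sentence's state
loss = prediction_loss(h_t, h_next)
```

Minimizing this loss jointly with the usual self-supervised objective would push intermediate layers toward representations that carry discourse-level information about what comes next.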


Publisher:, Link>

ABSTRACT

With the recent surge in threats to public safety, the security focus of several organizations has moved towards enhanced intelligent screening systems. Conventional X-ray screening relies on the human operator, which limits how accurately potential threats can be identified. This paper explores X-ray security imagery by introducing a novel approach that generates realistic synthesized data, opening up the possibility of using different settings to simulate occlusion, radiopacity, varying textures, and distractors in order to generate cluttered scenes. The generated synthetic data is effective for training deep networks and allows better generalization, helping to deal with domain adaptation in the real world. The extensive set of experiments in this paper provides evidence for the efficacy of synthetic datasets over human-annotated datasets for automated X-ray security screening. The proposed approach outperforms the state of the art on a diverse threat-object dataset in terms of the mean Average Precision (mAP) of both region-based and classification/regression-based detectors.
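Since the detectors above are compared by mean Average Precision, a minimal sketch of the per-class AP computation may help: detections are sorted by confidence, each is flagged as a true or false positive, and AP is the area under the resulting precision-recall curve. This is the generic all-point formulation, not necessarily the exact protocol used in the paper.

```python
# Minimal sketch of per-class average precision (all-point area under PR).
import numpy as np

def average_precision(confidences, is_true_positive, n_ground_truth):
    order = np.argsort(confidences)[::-1]            # highest confidence first
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / n_ground_truth
    precision = tp_cum / (tp_cum + fp_cum)
    # Accumulate precision over each recall increment.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

# Three detections, two of which match one of two ground-truth threats.
ap = average_precision([0.9, 0.8, 0.3], [1, 0, 1], n_ground_truth=2)
```

mAP is then simply the mean of this quantity over all threat classes.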


Publisher:, Link>

ABSTRACT

Techniques for presenting objects spatially via density maps have been thoroughly studied, but there is a lack of research on how to display this information in the presence of several classes, i.e., multiclass density maps. Moreover, there is even less research on how to design an interactive visualization for comparison tasks on multiclass density maps. One application domain that requires this type of visualization for comparison tasks is crime analytics, and the lack of research in this area results in ineffective visual designs. To fill this gap, we study four types of techniques to compare multiclass density maps, using car theft data. The interactive techniques studied are swipe, translucent overlay, magic lens, and juxtaposition. The results of a user study (N=32) indicate that juxtaposition yields the worst performance to compare distributions, whereas swipe and magic lens perform the best in terms of time needed to complete the experiment. Our research provides empirical evidence on how to design interactive idioms for multiclass density spatial data, and it opens a line of research for other domains and visual tasks.
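The data side of these comparisons can be sketched simply: each class becomes a 2D density grid, and an idiom like swipe composes the two grids at an interactive divider, showing one class on each side. The two synthetic point clouds below are illustrative stand-ins for real multiclass spatial data.

```python
# Sketch of two-class density grids and a "swipe" composition at column x.
# Point clouds are synthetic stand-ins for real event data.
import numpy as np

rng = np.random.default_rng(7)
bins = 16
pts_a = rng.normal(loc=0.3, scale=0.1, size=(200, 2))  # class A events
pts_b = rng.normal(loc=0.7, scale=0.1, size=(200, 2))  # class B events

def density(points):
    grid, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    return grid / grid.sum()  # normalized 2D density grid

def swipe(grid_a, grid_b, x):
    # Columns of the x-axis before the divider come from A, the rest from B.
    out = grid_b.copy()
    out[:x, :] = grid_a[:x, :]
    return out

view = swipe(density(pts_a), density(pts_b), bins // 2)
```

Dragging the divider (`x`) interactively is what turns this static composition into the swipe idiom evaluated in the study.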


Publisher:, Link>

ABSTRACT

In this chapter, we will cover known classifiers that can be used in X-ray testing. Several examples will be presented using Python. The reader can easily modify the proposed implementations in order to test different classification strategies. We will then present how to estimate the accuracy of a classifier using hold-out, cross-validation and leave-one-out. Finally, we will present an example that involves all steps of a pattern recognition problem, i.e., feature extraction, feature selection, classifier’s design, and evaluation. We will thus propose a general framework to design a computer vision system in order to select—automatically—from a large set of features and a bank of classifiers, those features and classifiers that can achieve the highest performance.
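The three evaluation schemes named above can be sketched in a few lines; the classifier and synthetic data below are generic stand-ins (the chapter's own examples also use Python, though not necessarily this library or these settings).

```python
# Minimal sketch of hold-out, k-fold cross-validation, and leave-one-out.
# Data and classifier choice are illustrative, not the chapter's examples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import (train_test_split, cross_val_score,
                                     LeaveOneOut)

rng = np.random.default_rng(0)
X = rng.random((60, 4))                         # synthetic feature vectors
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)       # synthetic binary labels
clf = KNeighborsClassifier(n_neighbors=3)

# Hold-out: a single train/test split.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
holdout = clf.fit(Xtr, ytr).score(Xte, yte)

# k-fold cross-validation: average accuracy over 5 folds.
cv = cross_val_score(clf, X, y, cv=5).mean()

# Leave-one-out: one fold per sample (n fits in total).
loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

print(holdout, cv, loo)
```

Hold-out is cheapest but has the highest variance; leave-one-out is nearly unbiased but costly, with k-fold cross-validation the usual compromise between the two.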

