Resources

Intern Series: A Self-Taught Engineer Finds His Fit on Atlas Core - Meet Ojima Abraham

Ojima Abraham is a rising junior at Franklin & Marshall College and a Software Engineering intern in our New York City office. A self-taught developer, Ojima brings a unique perspective to his work on the Atlas Core team, where he has been grateful to find meaningful support, production-level work, and lasting friendships. In this interview, you'll hear how the people and culture of the program made it a perfect fit.

Alex Wilson: Hey Ojima! It's great to meet you. I heard you're a self-taught engineer, which sounds really impressive. Can you tell me more about that?

Ojima Abraham: The process was both exciting and intimidating. Since I didn't have a personal computer at the time, I had to use my phone to learn and practice coding. But it was thrilling, because I loved the idea of giving a computer a set of instructions and watching it carry them out. Later, I was able to ask friends to let me practice on their computers, and in return I taught them how to write basic HTML and CSS.

AW: Wow, that's fantastic! What ended up bringing you to MongoDB?

OA: I was just looking up summer internships and found MongoDB as one of the options. But I decided to intern at MongoDB because of my interview experience. One of the things that was emphasized the most during the interviews was how interns get to do work that makes it to production, which I thought was very exciting; I didn't want to spend my summer working on some "intern project" that was going to be thrown away at the end of the summer. I also really liked how supportive and positive all my interviewers and recruiters were.

AW: Absolutely. I've got to agree there; the Campus Recruiting team has some awesome people. What team have you been working with this summer?

OA: I am currently interning on the Atlas Core (Atlas 1, 2, 3) team. MongoDB Atlas is a database-as-a-service that enables you to build applications and scale faster. Atlas Core 1 has really enabled me to work on very interesting, challenging, and useful projects, and that has been one of the highlights of my internship experience. In the simplest way possible: I am currently working on a new feature that will allow users to add a new collection type that automatically organizes itself into buckets, making it easy to query. This project has really challenged me to get comfortable with being uncomfortable. I've been able to push myself to learn more about Online Archive and the components of our project.

AW: What's the most interesting thing you've learned this summer?
OA: Normally, I would want to list something related to developing my technical skills, but I feel the most interesting thing I've learned this summer is how the different roles in a tech company take a project from inception to production. I've learned about the parts that product managers, engineers, project managers, technical writers, and others play in making a product successful, about the different areas of agreement and compromise, and about how important strong communication is among these roles.

AW: Have you gotten support along the way?

OA: My mentors have been the most supportive people I know! I've learned so much from them, gained so much support from them, and feel like I've made lifelong friends. They are always available to answer any of my questions, have been very patient while helping me learn the things I don't know, have given me ownership of my work, and have made sure I never felt lost in this internship! The other interns on my team have been just as supportive, providing me with great feedback. I genuinely feel like it was the perfect fit!

AW: I'm so glad! More broadly speaking, would you say this supportiveness is a central part of MongoDB's culture?

OA: I would describe the culture here at MongoDB as supportive, positive, uplifting, inclusive, and caring. Everyone is willing to help you, answer your questions, and push you to become your best self, while making you appreciate your own individual strengths and celebrate the diversity of thought and experience that everyone brings to the company.

AW: And have your self-taught roots influenced your experience here at all?

OA: I feel like the initial challenge I faced while learning to code removed the fear of stepping into unfamiliar territory at MongoDB.
I'm not afraid to pick up challenging and unfamiliar tasks or tickets, because my mindset is always "I'll figure it out somehow, just like I figured it out when I first started learning to code."

AW: That's a fantastic mindset. Thanks so much for taking the time to share your experience, Ojima. I just have one final question: what's your favorite part about being part of the MongoDB community?

OA: This might sound cliché, but hands-down the people. It's just a positive environment where I have found my own people and have never felt out of place at all. Everyone seems happy to be here, and that infectious happiness is spread around every day.

P.S. We are excited to announce we'll be hosting two virtual summits for students this summer: our inaugural Make It Matter Summit (Wednesday, August 25 — RSVP here) and our fourth annual Women in Computer Science "WiCS" Summit (Wednesday, September 1 — RSVP here). Each event will include technical presentations, professional development, and networking opportunities for first- and second-year undergraduates. Hope to see you there!

Easily Move Data from Oracle to MongoDB with Apache Kafka

Change data capture (CDC) has existed in the database world for years. CDC listens for changes to a database (inserts, updates, and deletes) and sends those events to other database systems, supporting scenarios such as ETL, replication, and database migration. By leveraging Apache Kafka, the Confluent Oracle CDC Connector, and the MongoDB Connector for Apache Kafka, you can easily stream data from Oracle into MongoDB. In this post, we will walk through a step-by-step configuration for moving data from Oracle to MongoDB that you can easily reuse, adapt, and explore.

At a high level, we will configure the reference architecture above in a self-contained Docker Compose environment that includes the following: Oracle Database, MongoDB, Apache Kafka, and Confluent KSQL. These containers all run on a local bridge network, so you can interact with them from your local Mac or PC. Check out the GitHub repository to download the complete example.

Preparing the Oracle Docker image

If you have an existing Oracle database, remove the "database" section from the docker-compose file. If you do not already have an Oracle database, you can pull Oracle Database Enterprise Edition from Docker Hub. You will need to accept Oracle's terms and conditions, log in to your Docker account via docker login, and then run docker pull store/oracle/database-enterprise:12.2.0.1-slim to download the image locally.

Launching the Docker environment

The docker-compose file will launch the following: Apache Kafka including Zookeeper, REST API, Schema Registry, and KSQL; Apache Kafka Connect; MongoDB Connector for Apache Kafka; Confluent Oracle CDC Connector; Oracle Database Enterprise. The complete sample code is available from a GitHub repository. To launch the environment, make sure you have your Oracle environment ready, then git clone the repo and build the following: docker-compose up -d --build

Once the compose file finishes, you will need to configure your Oracle environment for use by the Confluent CDC Connector.

Step 1: Connect to your Oracle instance

If you are running Oracle within the Docker environment, you can use docker exec as follows: docker exec -it oracle bash -c "source /home/oracle/.bashrc; sqlplus /nolog" Then, at the SQL prompt, run: connect / as sysdba

Step 2: Configure Oracle for the CDC Connector

First, check whether the database is in archive log mode: select log_mode from v$database; If the mode is not "ARCHIVELOG", perform the following: SHUTDOWN IMMEDIATE; STARTUP MOUNT; ALTER DATABASE ARCHIVELOG; ALTER DATABASE OPEN; Verify the archive mode: select log_mode from v$database; The LOG_MODE should now be "ARCHIVELOG".
Next, enable supplemental logging for all columns ALTER SESSION SET CONTAINER=cdb$root; ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; The following should be run on the Oracle CDB: CREATE ROLE C##CDC_PRIVS; GRANT CREATE SESSION, EXECUTE_CATALOG_ROLE, SELECT ANY TRANSACTION, SELECT ANY DICTIONARY TO C##CDC_PRIVS; GRANT SELECT ON SYSTEM.LOGMNR_COL$ TO C##CDC_PRIVS; GRANT SELECT ON SYSTEM.LOGMNR_OBJ$ TO C##CDC_PRIVS; GRANT SELECT ON SYSTEM.LOGMNR_USER$ TO C##CDC_PRIVS; GRANT SELECT ON SYSTEM.LOGMNR_UID$ TO C##CDC_PRIVS; CREATE USER C##myuser IDENTIFIED BY password CONTAINER=ALL; GRANT C##CDC_PRIVS TO C##myuser CONTAINER=ALL; ALTER USER C##myuser QUOTA UNLIMITED ON sysaux; ALTER USER C##myuser SET CONTAINER_DATA = (CDB$ROOT, ORCLPDB1) CONTAINER=CURRENT; ALTER SESSION SET CONTAINER=CDB$ROOT; GRANT CREATE SESSION, ALTER SESSION, SET CONTAINER, LOGMINING, EXECUTE_CATALOG_ROLE TO C##myuser CONTAINER=ALL; GRANT SELECT ON GV_$DATABASE TO C##myuser CONTAINER=ALL; GRANT SELECT ON V_$LOGMNR_CONTENTS TO C##myuser CONTAINER=ALL; GRANT SELECT ON GV_$ARCHIVED_LOG TO C##myuser CONTAINER=ALL; GRANT CONNECT TO C##myuser CONTAINER=ALL; GRANT CREATE TABLE TO C##myuser CONTAINER=ALL; GRANT CREATE SEQUENCE TO C##myuser CONTAINER=ALL; GRANT CREATE TRIGGER TO C##myuser CONTAINER=ALL; ALTER SESSION SET CONTAINER=cdb$root; ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; GRANT FLASHBACK ANY TABLE TO C##myuser; GRANT FLASHBACK ANY TABLE TO C##myuser container=all; Next, create some objects CREATE TABLE C##MYUSER.emp ( i INTEGER GENERATED BY DEFAULT AS IDENTITY, name VARCHAR2(100), lastname VARCHAR2(100), PRIMARY KEY (i) ) tablespace sysaux; insert into C##MYUSER.emp (name, lastname) values ('Bob', 'Perez'); insert into C##MYUSER.emp (name, lastname) values ('Jane','Revuelta'); insert into C##MYUSER.emp (name, lastname) values ('Mary','Kristmas'); insert into C##MYUSER.emp (name, lastname) values ('Alice','Cambio'); commit; Step 3: Create Kafka Topic Open a new terminal/shell 
and connect to your Kafka server as follows: docker exec -it broker /bin/bash When connected, create the Kafka topic: kafka-topics --create --topic SimpleOracleCDC-ORCLCDB-redo-log \ --bootstrap-server broker:9092 --replication-factor 1 \ --partitions 1 --config cleanup.policy=delete \ --config retention.ms=120960000

Step 4: Configure the Oracle CDC Connector

The oracle-cdc-source.json file in the repository contains the configuration of the Confluent Oracle CDC Connector. To configure, simply execute: curl -X POST -H "Content-Type: application/json" -d @oracle-cdc-source.json http://localhost:8083/connectors

Step 5: Set up KSQL data flows within Kafka

As Oracle CRUD events arrive in the Kafka topic, we will use KSQL to stream these events into new topics for consumption by the MongoDB Connector for Apache Kafka. docker exec -it ksql-server bin/bash ksql http://127.0.0.1:8088 Enter the following commands: CREATE STREAM CDCORACLE (I DECIMAL(20,0), NAME varchar, LASTNAME varchar, op_type VARCHAR) WITH ( kafka_topic='ORCLCDB-EMP', PARTITIONS=1, REPLICAS=1, value_format='AVRO'); CREATE STREAM WRITEOP AS SELECT CAST(I AS BIGINT) as "_id", NAME , LASTNAME , OP_TYPE from CDCORACLE WHERE OP_TYPE!='D' EMIT CHANGES; CREATE STREAM DELETEOP AS SELECT CAST(I AS BIGINT) as "_id", NAME , LASTNAME , OP_TYPE from CDCORACLE WHERE OP_TYPE='D' EMIT CHANGES; To verify the streams were created: SHOW STREAMS; This command will show the following: Stream Name | Kafka Topic | Format ------------------------------------ CDCORACLE | ORCLCDB-EMP | AVRO DELETEOP | DELETEOP | AVRO WRITEOP | WRITEOP | AVRO ------------------------------------

Step 6: Configure the MongoDB sink

The following is the configuration for the MongoDB Connector for Apache Kafka: { "name": "Oracle", "config": { "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector", "topics": "WRITEOP", "connection.uri": "mongodb://mongo1", "writemodel.strategy":
"com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneBusinessKeyTimestampStrategy", "database": "kafka", "collection": "oracle", "document.id.strategy": "com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy", "document.id.strategy.overwrite.existing": "true", "document.id.strategy.partial.value.projection.type": "allowlist", "document.id.strategy.partial.value.projection.list": "_id", "errors.log.include.messages": true, "errors.deadletterqueue.context.headers.enable": true, "value.converter":"io.confluent.connect.avro.AvroConverter", "value.converter.schema.registry.url":"http://schema-registry:8081", "key.converter":"org.apache.kafka.connect.storage.StringConverter" } } In this example, this sink process consumes records from the WRITEOP topic and saves the data to MongoDB. The write model, UpdateOneBusinessKeyTimestampStrategy, performs an upsert operation using the filter defined on PartialValueStrategy property which in this example is the "_id" field. For your convenience, this configuration script is written in the mongodb-sink.json file in the repository. 
To configure, execute: curl -X POST -H "Content-Type: application/json" -d @mongodb-sink.json http://localhost:8083/connectors

Delete events are written to the DELETEOP topic and are sunk to MongoDB with the following sink configuration: { "name": "Oracle-Delete", "config": { "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector", "topics": "DELETEOP", "connection.uri": "mongodb://mongo1", "writemodel.strategy": "com.mongodb.kafka.connect.sink.writemodel.strategy.DeleteOneBusinessKeyStrategy", "database": "kafka", "collection": "oracle", "document.id.strategy": "com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy", "document.id.strategy.overwrite.existing": "true", "document.id.strategy.partial.value.projection.type": "allowlist", "document.id.strategy.partial.value.projection.list": "_id", "errors.log.include.messages": true, "errors.deadletterqueue.context.headers.enable": true, "value.converter":"io.confluent.connect.avro.AvroConverter", "value.converter.schema.registry.url":"http://schema-registry:8081" } } curl -X POST -H "Content-Type: application/json" -d @mongodb-sink-delete.json http://localhost:8083/connectors

This sink process uses the DeleteOneBusinessKeyStrategy write model strategy. In this configuration, the sink reads from the DELETEOP topic and deletes documents in MongoDB based upon the filter defined in the PartialValueStrategy property; in this example that filter is the "_id" field.

Step 7: Write data to Oracle

Now that your environment is set up and configured, return to the Oracle database and insert the following data: insert into C##MYUSER.emp (name, lastname) values ('Juan','Soto'); insert into C##MYUSER.emp (name, lastname) values ('Robert','Walters'); insert into C##MYUSER.emp (name, lastname) values ('Ruben','Trigo'); commit;

Next, observe the data as it arrives in MongoDB by accessing the MongoDB shell: docker exec -it mongo1 /bin/mongo The inserted data will now be available in MongoDB.
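The delete path can be sketched the same way. The snippet below is a hypothetical in-memory stand-in for what DeleteOneBusinessKeyStrategy does conceptually: a record arriving from the DELETEOP topic removes the document whose "_id" matches the business key, leaving everything else untouched:

```python
# Illustrative, in-memory stand-in for DeleteOneBusinessKeyStrategy
# (not the connector's actual code): a record from the delete topic
# removes the document whose "_id" matches its business key.
def delete_by_business_key(collection, record):
    if record.get("OP_TYPE") == "D":
        collection.pop(record["_id"], None)

collection = {
    11: {"NAME": "Rob", "LASTNAME": "Walters"},
    12: {"NAME": "Ruben", "LASTNAME": "Trigo"},
}
delete_by_business_key(collection, {"_id": 11, "OP_TYPE": "D"})
```

Only the document with _id 11 is removed; the rest of the collection is untouched, mirroring the targeted deletes the sink performs in MongoDB.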
If we update the data in Oracle, e.g. UPDATE C##MYUSER.emp SET name='Rob' WHERE name='Robert'; COMMIT; the document will be updated in MongoDB as: { "_id" : NumberLong(11), "LASTNAME" : "Walters", "NAME" : "Rob", "OP_TYPE" : "U", "_insertedTS" : ISODate("2021-07-27T10:25:08.867Z"), "_modifiedTS" : ISODate("2021-07-27T10:25:08.867Z") } If we delete the data in Oracle, e.g. DELETE FROM C##MYUSER.emp WHERE name='Rob'; COMMIT; the documents with name='Rob' will no longer be in MongoDB. Note that it may take a few seconds for changes to propagate from Oracle to MongoDB.

Many possibilities

In this post we performed a basic setup of moving data from Oracle to MongoDB via Apache Kafka, the Confluent Oracle CDC Connector, and the MongoDB Connector for Apache Kafka. While this example is fairly simple, you can add more complex transformations using KSQL and integrate other data sources within your Kafka environment, making a production-ready ETL or streaming environment with best-of-breed solutions.

Resources
How to Get Started with MongoDB Atlas and Confluent Cloud
Announcing the MongoDB Atlas Sink and Source Connectors in Confluent Cloud
Making your Life Easier with MongoDB and Kafka
Streaming Time-Series Data Using Apache Kafka and MongoDB

5 Data Trends Driving Competitive Advantage Today and Tomorrow

Recent market research from Cloudflight, a leading European analyst firm, identifies 12 major technology trends for this year. The research finds a radical shift toward cloud adoption, much of it in response to the coronavirus pandemic and the changes it has accelerated for people, society, the economy, and the environment. In a recent webinar, Dr. Stefan Ried (Cloudflight) and Mat Keep (MongoDB) shared key industry insights and explored the most prevalent trends in detail. The session found that, as demand for technology innovation grows, a company's competitive advantage increasingly depends on building software around its most important asset: data. In this post, Dr. Ried breaks down five key trends and analyzes how enterprises can drive data innovation to stay ahead. Mat Keep then offers practical next steps for managing ever-growing volumes of data in the cloud.

Trend 1: Data becomes the differentiator, even beyond software

Many startups have disrupted incumbents in their industries with software-based innovation. Now, non-digital-native enterprises of all kinds are catching up, and data is becoming more important than software algorithms. Here's an example: Imagine a traditional automotive company. The business could purchase components and software from a supplier to implement autonomous driving in its cars, but without enough learning data out of every region its cars wouldn't drive reliably. In this case — and many more — the automotive firm cannot just buy a software competitive advantage off the shelf. Instead, it must build that advantage, and build it using data. It's why data is quickly becoming the differentiator in all industries and why delivering a modern customer experience is increasingly reliant on this underlying infrastructure.

Software Stack Eruption (Source: Cloudflight 2020)

The above image illustrates just how the tech stack is evolving. Data quality is quickly becoming the outstanding differentiator compared to software algorithms. That's why we consider the access, ownership, and quality of data to be the mountain of innovation in this decade and moving forward.

Trend 2: Europe embraces various cloud scenarios

Cloud adoption in Europe has always been behind that of the United States. One obvious reason is data sovereignty and compliance concerns. It would be an intriguing thought experiment to reflect on how U.S. public cloud adoption would have developed over the past 10 years if the only strong and innovative providers were European or even Chinese companies. Europe, however, is now at an important inflection point. Global hyperscalers have finally addressed these national privacy issues.
Platform service providers, including MongoDB with MongoDB Atlas, have significantly increased support for these privacy requirements with technical features such as client-side encryption and operational SLAs. This achievement enables enterprises and even public government agencies across Europe to embrace all three basic types of cloud scenarios: Lift and shift, moving existing legacy workloads without any change to new IaaS landscapes in the cloud. Modernization, decomposing existing application stacks into cloud-native services such as a DBaaS; modernized workloads can leverage public cloud PaaS stacks much better than monolithic legacy stacks. New development of cloud-native applications, building modern applications with less code and more orchestration of many PaaS services.

Trend 3: Hybrid cloud is the dominant cloud choice, and multicloud will come next

Nearly 50 percent of participants in our recent webinar said hybrid cloud is their current major deployment model. These organizations use different public and private clouds for different workloads. Just 20 percent of the attendees still restrict activities to a single cloud provider. Another equally sized group claimed the exact opposite approach: multicloud environments, where a single workload may use a mixture of cloud sources or may be deployed on different providers to reach multiple regions. See below.

Embracing the Cloud webinar poll results (June 2021)

The increasing adoption of these real multicloud scenarios is yet another major trend we will see for many years. Less experienced customers may be afraid of the complexity of using multiple cloud providers, but independent vendors offer the management of a full-service domain across multiple providers. MongoDB Atlas offers this platform across AWS, Azure, and GCP, and paves the road for real multicloud adoption and innovation.
Trend 4: Cloud-native is taking off with innovative enterprises

In many client engagements, Cloudflight sees a strong correlation between new business models driven by digital products and cloud-native architectures. Real innovation happens when differentiated business logic meets the orchestration of a PaaS offering. That's why car OEMs do not deploy packaged asset-lifecycle-management systems but instead develop their own digital twins for the emerging fleet of millions of digitized vehicles. These PaaS architectures follow an API-first, service-oriented paradigm leveraging a lot of open-source software. Most of this open-source software is commercially managed by hyperscalers and their partner vendors to make it accessible and highly available without deep knowledge of the service itself. The approach enables very fast productive operations of new digital products. If compliance requires it, however, customers may operate the same open-source services on their own again. Once your product becomes extremely successful and you're dealing with data volume far beyond one petabyte, you may also reconsider self-operations for cost reasons. This is because there is no operational lock-in for a specific service provider and you may become an "operations pro" on your own.

Trend 5: Digital twins become cloud drivers in many industries

Many people still associate the term "cloud computing" with virtualized compute-and-storage services. Yet cloud computing is far more. The PaaS level has become increasingly attractive with prepackaged cloud-native services. It has been on the market for many years, but its perception and adoption — especially in Europe — are still behind its potential. Based on today's PaaS services, cloud providers and their partners are already extending their offers to higher levels. The space of digital twins, along with AI, presents clear opportunities here.
There are offerings for each of the three major areas of digital twins: In modern automated manufacturing (Industry 4.0), production twins are created when a product is ordered, making production-relevant information (such as individual configurations) available to all manufacturing steps along the supply chain. Once the final product is delivered, the requirements for interactions and data models change significantly for these post-production-lifecycle twins.

Production, post-production, and simulation twins (Source: Cloudflight)

Finally, simulation twins are a smart approach to testing machine learning applications. Take, for example, the autonomous driving challenge: Instead of testing the ongoing iterations of driving "knowledge" on a physical vehicle, running virtual simulation twins is much preferred and safer than experiments in real traffic situations. Beyond manufacturing and automotive, there are many verticals in which digital twins make sense. Health care is a clear and obvious example in which real-life experiments may not always be the best approach. Success here depends mostly on the cooperation between technology vendors and the industry-specific digital twin ecosystems.

In Summary

Each of the five trends discussed centers on or closely relates to cloud-native data management. A traditional database may be able to run for specific purposes on cloud infrastructure, but only a modern cloud-native application data platform is able to serve both the migration of legacy applications and the development of multiple new cloud-native applications.

Next Steps

Where and how can companies get started on a path to using data as a driver of competitive advantage? Mat Keep, Senior Director of Products at MongoDB, takes us through how to best embrace this journey. As companies move to embrace the cloud, they face an important choice.
Do they: Lift and shift: move existing applications to run in the cloud on the same architecture and technologies used on premises. Transform (modernize): rearchitect applications to take advantage of new cloud-native capabilities such as elasticity, redundancy, global distribution, and managed services. Lift and shift is often seen as an easier and more predictable path since it reuses a lot of the technology you use on premises — albeit now running in the cloud — presenting both the lowest business risk and least internal cultural and organizational resistance. It can be the right path in some circumstances, but we need to define what those circumstances are. For your most critical applications, lift and shift rarely helps you move the business forward. You will be unable to fully exploit new cloud-native capabilities that enable your business to build, test, and adapt faster. The reality we all face is that every application is different, so there is no simple or single “right” answer to choosing lift and shift versus transformation. In some cases, lift and shift can be the right first step, helping your teams gain familiarity with operating in the cloud before embarking on a fuller transformation as they see everything the cloud has to offer. This can also be a risk, however, if your teams believe they are done with the cloud journey and don’t then progress beyond that first step. To help business and technology leaders make the right decisions as they embrace the cloud, we have created an Executive Perspective for Lift and Shift Versus Transformation . The perspective presents best practices that can help prioritize your efforts and mobilize your teams. By working with more than 25,000 customers, including more than 50 percent of the Fortune 100, the paper shares the evaluation frameworks we have built that can be used to navigate the right path for your business, along with the cultural transformations your teams need to make along the way. 
Embracing the Cloud: Assessment Framework Toyota Material Handling in Northern Europe has recently undergone its own cloud journey. As the team evolved its offerings for industry 4.0, it worked with MongoDB as part of its transformation. Moving from monolithic applications and aging relational databases running on premises to microservices deployed on a multicloud platform, the company completed its migration in just four months. It reduced costs by more than 60 percent while delivering an agile, resilient platform to power its smart factory business growth. To learn more about cloud trends and the role of data in your cloud journey, tune in to the on-demand webinar replay .

Intern Series: Making Remote Work Meaningful (and Fun!) - Meet Sophia Li

Sophia Li is a rising senior at the University of Waterloo, currently working remotely as a Software Engineering intern. This summer, she has been part of the DevHub Platform team, where she is helping build MongoDB's growing Developer Hub. Despite working remotely from Canada, she is excited to be involved in production work and to have found meaningful support in her professional community. In this interview, you'll learn more about how Sophia has made her remote internship experience memorable.

Alex Wilson: Hey Sophia, it's great to see you again since we last spoke at an intern Learning & Development event. I'm excited to hear about your summer! To start, can you tell me a little about what brought you to MongoDB?

Sophia Li: I decided to intern at MongoDB for many reasons. First, I love that there's a wide variety of teams to choose from. From Core Server to Education, I think there's truly something for everyone. I also love the flexibility in the type of work I could do. I was able to choose between frontend, backend, and full-stack. Many engineering teams work with tools and technologies that I've never used before, so I was initially concerned that this would make me a weaker candidate, but that was not the case. During my interviews, I learned that teams are very open to giving interns the chance to work with new tech and are willing to teach it to them. Overall, speaking with my interviewers gave me a great sense of the company culture. MongoDB felt like a company where I could learn, grow, and thrive.

AW: That's so great to hear! I definitely agree with your take on the company culture. What team did you end up choosing?

SL: I am interning on the DevHub Platform team! We work on building the Developer Hub, which houses code, content, tutorials, and more to support developers that use MongoDB. It's a relatively new team that consists of me, two full-time engineers, a product manager, and a product designer.

AW: And what work have you been doing with them?

SL: I am spending this summer working on a new portion of the DevHub site. Specifically, I am working on a new page that features information about MongoDB's Community Champions program and features our current Community Champions. MongoDB Community Champions is a program initiative led by the Community Team. This program aims to strengthen our relationship with external MongoDB advocates in the developer community. The landing page is used to educate developers about this program and its eligibility.
Aside from building the landing page, I am working on an application form that will allow people to apply to the program. I will also be creating a bio page for each Community Champion! Much of this work involves creating the UI and managing data. I recently created a new Community Champions API with Strapi (our CMS), and used GraphQL to query the API from the frontend. I’ve been able to work with Strapi and MongoDB on the backend, and Gatsby and React on the frontend. A cool challenge I found was implementing responsive design. This was important in order to provide a great user experience on all types of devices. This is a very fun project for me, and I love being able to touch both the backend and frontend. I have learned tons since I’ve started this! AW: Nice! That’s such meaningful work. I’m sure that finding a supportive team is especially important during your time working remotely—how has that been? SL: I think my team and mentor have done a fantastic job of setting me up for success. They have been a great help and provided me with lots of support from day one. They are so resourceful and knowledgeable, and I have been able to learn so much from them! Being pretty new to web development, the project I was given felt daunting at first. I felt like I had to learn from scratch, but my mentor made it really easy for me to do this through his guidance (shoutout to Jordan!). My mentor took the time to help me ramp up by scheduling multiple sessions to teach me certain topics, give me walkthroughs, or pair program. We have weekly 1:1s where I get to express what’s on my mind and communicate my goals. Despite working remotely, I was always able to get the help I needed. My mentor always made time to answer my questions and explain things thoroughly to help me develop a better understanding of what I was learning. I have also received valuable feedback from my mentor through code reviews which has helped me become a better engineer. 
AW: I’m so impressed that you’ve found this much value in your remote experience. Is there anything that you’ve learned about yourself in the process? SL: I’ve learned that remote working can make it more difficult to set boundaries because there is no physical separation between work and your personal life. As a result, I make a conscious effort to take regular breaks. Luckily, I’m always encouraged by my peers to take breaks at work. The 1:1 check-ins I have with my mentor and campus program manager are a great time for us to discuss how I’m doing and how they can support me better, and they make sure I'm never overwhelmed with work. I use my breaks to get away from my desk to eat, recharge, and spend some time in my backyard. I’ve also learned that remote working requires you to put more effort into communicating with others in order to avoid feeling isolated. But my mentor is very responsive and has made remote communication between us easy. Whenever I need help, I will hop on a call or send a Slack message to them. My team also has weekly “work periods” where we all hop on a call and do our work together which kind of mimics an office environment where we’re all at our desks. In terms of growth opportunities, I feel like working remotely has given me a higher level of independence and autonomy. I’ve been able to enhance my time management skills as I have to hold myself more accountable to complete tasks, and of course, having a fun project to do that genuinely excites me also helps. I was assigned a really interesting project which motivates me to come to work everyday! AW: Clearly, you’ve had some great professional experiences, but to close, I would love to know: have you been having fun? SL: The campus recruiting team has put on some awesome virtual intern events this summer including a Spain trivia game, escape room, and chocolate-making class! These events were super fun to attend, and I have been able to meet other interns through them as well! 
I am also a part of the Underrepresented Genders in Tech affinity group, and we recently had a game night which offered a really great opportunity to connect with other members of the group. In addition, I occasionally do virtual game nights and catch-ups with a group of remote interns. These social events have definitely helped make working remotely a lot less isolating and lonely. I have also been doing coffee chats with other interns and full-timers which has been a great way to make connections and get to know people on a deeper level! P.S. We are excited to announce we’ll be hosting two virtual summits for students this summer: our inaugural Make It Matter Summit (Wednesday, August 25 — RSVP here ) and our fourth annual Women in Computer Science “WiCS” Summit (Wednesday, September 1 — RSVP here ). Each event will include technical presentations, professional development, and networking opportunities for first- and second-year undergraduates. Hope to see you there!

MongoDB & Bosch: A Discussion on AIoT

For more than a decade, the digital transformation of industry has focused on the connected technologies that make up the Internet of Things (IoT). As artificial intelligence and machine learning have matured, a new field has emerged that combines these trends: AIoT, the Artificial Intelligence of Things, which applies AI to the data collected by IoT devices. Among the companies pioneering this space is engineering and industrial giant Bosch, long a leader in IoT. The move into AIoT has allowed Bosch to build smart products that either have intelligence built in or draw on “swarm intelligence” in their back end, using the data the products collect to improve them. In April 2021, Mark Porter, CTO of MongoDB, and Dirk Slama, Vice President for Co-Innovation and IT/IoT Alliances at Bosch, sat down to discuss AIoT. Their conversation touched on what MongoDB and Bosch are working on around AIoT and where they see AIoT heading in the future. Bosch’s new focus on AIoT reinforced its need for a flexible, modern data platform such as MongoDB. IoT devices collect enormous amounts of data; as Bosch adds sensors and new data types to its products, MongoDB allows it to adapt quickly without having to redesign its schema whenever a change is needed. As part of their efforts to advance AIoT technology, Bosch and other companies recently founded the AIoT User Group, an initiative open to anyone. The group’s goal is to bring end users working on AIoT business and use cases together with technology experts to share best practices around AIoT solutions. This co-creation approach allows for the rapid utilization of best practices to try out and develop new ideas and technologies. Porter and Slama’s conversation covered many AIoT topics and offered a glimpse at the technology’s next steps. For instance, Slama wants to see agility added to AIoT without losing control. In AIoT, there are many features that must be perfect on day one, but there are also a lot of features where you want to continuously improve system performance, which requires an agile approach. For Mark Porter and Dirk Slama’s full conversation, check out the video below!
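The schema flexibility mentioned above is worth making concrete. The sketch below uses plain Python dictionaries as stand-ins for MongoDB documents (the device and field names are hypothetical, not Bosch's actual data model): a newer sensor reading can carry a field the older one lacks, and queries simply skip documents without it, so no migration is required.

```python
# Two sensor readings as MongoDB-style documents (plain Python dicts here;
# all field names are hypothetical). Because MongoDB collections do not
# enforce a fixed schema, the second document can add a new "vibration_hz"
# field without any migration of the first.
reading_v1 = {
    "device_id": "pump-17",
    "ts": "2021-04-01T12:00:00Z",
    "temperature_c": 41.5,
}
reading_v2 = {
    "device_id": "pump-17",
    "ts": "2021-04-01T12:01:00Z",
    "temperature_c": 41.7,
    "vibration_hz": 120.3,  # sensor added later; no schema change needed
}

collection = [reading_v1, reading_v2]  # stand-in for a MongoDB collection

# Documents that lack the newer field are simply skipped by the filter.
high_vibration = [d for d in collection if d.get("vibration_hz", 0) > 100]
print([d["ts"] for d in high_vibration])
```

In a real deployment the same filter would be a MongoDB query against the collection; the point is only that old and new document shapes coexist.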

Intern Series: Finding Community While Having Ownership in Production - Meet Carolina Obregon

Carolina Obregon is a rising senior at Tecnológico de Monterrey in Mexico who is interning in our New York City office. She has spent the summer working with the DevRel Platform/Education team, building a quiz widget for docs.mongodb.com. Carolina has been excited to gain exposure to start-to-finish software development cycles, and in this interview, you’ll get to hear how she has found meaningful support both in her professional circle and from her membership in MongoDB’s Underrepresented Genders in Tech (UGT) Affinity Group. Alex Wilson: Hey Caro, it’s so good to see you! Can you tell me more about how you found out about MongoDB’s internship program? Carolina Obregon: I first applied to MongoDB’s Women in Computer Science Summit all the way back in 2019, and while I wasn’t accepted to attend that year, I kept receiving emails from MongoDB about blogs and future openings! When the next recruiting season came, I remembered MongoDB’s openings and decided to take the chance and apply to the internship program! I was really happy to even get the first interview and even happier when I received the offer. AW: Why did you choose MongoDB for your internship? CO: As a software engineer, it’s very important to me to be at a company that has an engineering-centric culture so that I’m working with exciting technological challenges, a modern stack, and teammates I can learn a lot from. Additionally, I really wanted to be part of a company that genuinely cares about its employees, diversity, and company culture, and throughout the recruiting process and talking with past interns and current employees, I found that MongoDB really checked all the boxes I was looking for. AW: That’s great! What has your day-to-day looked like since you’ve been here? CO: The team I work on is part of the greater Developer Relations Platform/Education organization, which is key for all our developers to learn MongoDB, from the basics to the advanced topics required to run MongoDB in production systems. My team specifically is responsible for developing the systems that run MongoDB documentation’s website (docs.mongodb.com). 
Documentation is crucial for MongoDB since the core of the product is essentially targeted at developers. My intern project is creating a quiz widget that is going to be displayed throughout the docs.mongodb.com guides and will ask our users key multiple-choice questions about the content they’re currently reading. It’s exciting that I’ve gotten to work on this project from start to finish and really experience how the software development cycle works in the industry, from working with the product manager to the product designer and receiving support from the other engineers on my team to make this all happen. AW: Awesome! Anything particularly interesting that you’ve learned? CO: I’ve gotten to work a lot with JavaScript’s React. I had done previous personal projects using this framework, but it has been very interesting getting to work with it in a real-life production environment and receiving guidance and feedback from my teammates on how to keep improving my skills. AW: It must be so gratifying to do work with such tangible results and have clear growth opportunities, and that sounds fascinating. Has your team given you much support in this work? CO: They were really good at ramping me up and making me feel comfortable with the work and tech stack from the beginning, giving me small tasks to start off and, once I was comfortable enough, assigning me my own project to develop on my own. The whole team has been super attentive and helpful throughout the whole summer, making sure that I’m constantly challenged, learning, and getting help with my work! AW: Fantastic! Is there anywhere else you’ve been finding support? CO: Getting the chance to be part of UGT (Underrepresented Genders in Tech) was a very fulfilling and meaningful experience. The Campus Team assigned a UGT mentor to each of us and prepared many events throughout the summer that ranged from talking about our personal experiences to fun game nights. 
I really enjoyed getting to know other interns and full-timers who are also part of the affinity group. In the short time that I’ve spent here, I’ve found MongoDB to be very supportive of underrepresented genders. Given that women make up 46% of the intern class, I always felt that I was in a very comfortable and open work environment every time I came to the office. AW: It’s been so nice to hear that you’ve had such a meaningful summer. There’s one last thing I’d love to hear: you talked about the company culture as part of the reason why you came to MongoDB. What have you found out about the culture since you’ve been here? CO: One of MongoDB’s core values is Build Together, and I think that the company culture really stems from that. It is a very collaborative, friendly work environment where your teammates and co-workers legitimately care about your wellbeing both personally and professionally.
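The quiz widget Carolina describes isn't shown in this post, but its core idea, a multiple-choice question checked against a stored correct answer, can be sketched in a few lines. Everything below is a hypothetical illustration (class and field names are invented), not MongoDB's actual implementation, and her real widget is a React component rather than Python.

```python
from dataclasses import dataclass


@dataclass
class QuizQuestion:
    # Hypothetical shape of one multiple-choice docs question; the real
    # widget stores and renders this differently.
    prompt: str
    choices: list
    answer_index: int  # position of the correct choice in `choices`

    def check(self, selected_index: int) -> bool:
        """Return True if the reader picked the correct choice."""
        return selected_index == self.answer_index


q = QuizQuestion(
    prompt="Which method inserts a single document into a collection?",
    choices=["insertOne()", "addOne()", "putDocument()"],
    answer_index=0,
)
print(q.check(0))  # the reader picked the correct choice
```

A front-end component would render `choices` as buttons and call `check` with the index the reader clicks.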

Controlling Your Colours in MongoDB Charts

Colour is an integral part of the story you want to tell with any form of data visualisation. With the latest release of MongoDB Charts, we’ve added more control over how you can assign colours to your charts! Previously, series colours were always assigned based on the order of the series within the chart. However, you may want to colour a chart based on the series values instead. There are some basic cases where these different strategies prove useful, such as colouring a “Summer” and a “Winter” series red and blue respectively to symbolise the seasons. In case that example isn’t convincing enough, we’ll create some good-looking charts using the Olympics dataset to fully explore what the new functionality can do. We’ll start with a basic single-series chart. These charts usually have a single field encoded on the x- and y-axes and display a single colour. For these charts, we now show a single colour swatch for you to edit. Simple, right? Multi-series charts: For more complicated charts with multiple series, we may want to colour the series based on the encoded field itself. These charts are created when multiple fields are encoded to an aggregation channel, where the field key is used to build the multi-series chart. In the chart above, I have a medal tally of the top 10 countries based on medal count. The chart itself is fine, but we could improve it with some useful colouring! A natural colour scheme for this chart is to assign each series the colour of its medal. Inside the Color Palette customisation option, you will see that each encoded field is now listed in the order in which it was encoded. With the colour scheme set to the medal colours, the chart conveys the original information much more clearly. Colours assigned to these channels will always stay with their fields and will ignore the ordering of those fields. Assigning chart colours to string data: The final chart we want to create involves data that is itself a String type. With these chart types, the Color Palette provides options to toggle between two different colour-assignment strategies: ‘By Order’ allows you to assign colours by the ordering of the series, while ‘By Series’ lets you customise the colour for a specific series value. To help streamline the process of assigning colours in the chart above, in the ‘By Order’ menu I can choose to assign colours based on the order in which each Discipline value appears in the chart. This may be useful if we don’t care which colour represents each Discipline. 
Alternatively, we could assign colours using ‘By Series’ so that we can be sure each Discipline is represented by its associated colour. Now that we have created all of our charts using the different ways we can assign colours, we can be confident that the colours in our data visualisations are consistent throughout our dashboard. Want to start colouring your charts today? You can start now for free by signing up for MongoDB Atlas , deploying a free tier cluster and activating Charts. Have an idea on how we can make MongoDB Charts better? Feel free to leave an idea at the MongoDB Feedback Engine .
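To make the difference between the two strategies concrete, here is a small sketch in Python. The palette values and series names are invented for illustration; this models the concept of order-based versus value-based colour assignment, not the internals of MongoDB Charts.

```python
# Hypothetical three-colour palette standing in for a Charts palette.
PALETTE = ["#8dd3c7", "#ffffb3", "#bebada"]


def colours_by_order(series_names):
    """'By Order': colours follow the order the series appear in the chart,
    so reordering the series reshuffles the colours."""
    return {name: PALETTE[i % len(PALETTE)]
            for i, name in enumerate(series_names)}


def colours_by_series(series_names, overrides):
    """'By Series': a chosen colour is pinned to a specific series value
    and keeps that colour regardless of ordering."""
    assignment = colours_by_order(series_names)
    assignment.update(overrides)  # pinned values win over order-based ones
    return assignment


disciplines = ["Swimming", "Athletics", "Gymnastics"]
print(colours_by_order(disciplines))
# Pin Swimming to a specific blue no matter where it sorts in the chart:
print(colours_by_series(disciplines, {"Swimming": "#1f77b4"}))
```

Sorting the chart differently changes every colour under the first strategy, while the pinned Swimming colour survives any reordering under the second, which is exactly why ‘By Series’ keeps dashboards consistent.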